I'm a fourth-year physicist at Bristol University, and my Masters project is part of the LHC programme.

My project is computer-based: simulating what CMS should see if the Higgs boson has a mass of 165 GeV (decaying to two W bosons, each of which decays to a lepton and a neutrino). I didn't know a huge amount about CMS (the detector) or the programming side before I started, so it was a steep learning curve. With three weeks to go until I have to stop practical work, I'm reconstructing the Higgs mass OK; I just need to play around with my electron reconstruction and cuts.
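As a rough illustration (a toy sketch, not actual CMS code; the event numbers are made up), here is the sort of transverse-mass quantity one reconstructs in this channel, since the two neutrinos escape undetected and the full invariant mass can't be rebuilt:

```python
import math

# Toy sketch only: transverse mass of the dilepton + missing-ET system,
# the usual stand-in for the Higgs mass in H -> WW -> l nu l nu, where the
# two escaping neutrinos make a full invariant-mass reconstruction impossible.
def transverse_mass(pt_ll, phi_ll, m_ll, met, phi_met):
    et_ll = math.sqrt(pt_ll**2 + m_ll**2)          # transverse energy of the lepton pair
    dphi = phi_ll - phi_met
    # |p_T(ll) + p_T(miss)|^2
    pt_sum_sq = pt_ll**2 + met**2 + 2.0 * pt_ll * met * math.cos(dphi)
    mt_sq = (et_ll + met)**2 - pt_sum_sq
    return math.sqrt(max(mt_sq, 0.0))

# Hypothetical event (momenta/masses in GeV, angles in radians):
print(transverse_mass(pt_ll=60.0, phi_ll=0.3, m_ll=45.0, met=70.0, phi_met=2.8))
```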

To be honest, I think a lot of the best real-world applications of the LHC will be indirect, much as the Apollo missions contributed to technologies that ended up being used for purposes they were never originally intended for. The LHC will process so much data every second that, over a year, something like a sixth of ALL the information the world produces annually will pass through the detectors and their computers.

The research that's gone into the various parts of the detectors (calorimeters, silicon trackers, etc.), as well as the magnets, computers and triggers (which rapidly choose which of the billions of collisions are worth keeping and which should be chucked), will have other applications, half of which probably haven't even been thought of yet.

So even if the physics discovered doesn't directly contribute practically, the engineering behind it definitely will.

by darrkespur on Thu Feb 21st, 2008 at 01:26:40 PM EST
I'm working at CDF, and if the Standard Model Higgs is at 165 GeV we will find it first ;-p
Although as a Karlsruher I hope, of course, that CMS will make the big discoveries.

The American is the orchid among human beings.
Volker Pispers
by Martin (weiser.mensch(at)googlemail.com) on Thu Feb 21st, 2008 at 01:41:49 PM EST
[ Parent ]
To be honest, even now the WW mode is looking very unlikely; that and the ZZ mode look like they might be squeezed down to low probability and effectively ruled out of the running. If it is a Standard Model Higgs, it looks like it'll be at ~114-125 GeV.
by darrkespur on Thu Feb 21st, 2008 at 03:22:41 PM EST
[ Parent ]
Can you elaborate?

And what if the standard model Higgs isn't found? What are the alternatives?

Once, in a graduate course, the professor mentioned in passing that the Standard Model without a symmetry-breaking sector violates unitarity at about 1 TeV, so there has to be some new phenomenon, be it the Higgs or something else, before that energy is reached. What is there to that?
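If I remember the argument, it is essentially the Lee-Quigg-Thacker bound: without a light enough Higgs, the J=0 partial wave of longitudinal W scattering exceeds the unitarity limit, which requires roughly

```latex
% Rough unitarity bound from W_L W_L scattering in the Higgsless Standard Model:
m_H^2 \;\lesssim\; \frac{8\pi\sqrt{2}}{3\,G_F} \;\approx\; (1\ \mathrm{TeV})^2 ,
\qquad G_F \approx 1.17\times10^{-5}\ \mathrm{GeV}^{-2},
% so either a Higgs or some other new phenomenon has to show up below roughly that scale.
```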

We have met the enemy, and he is us — Pogo

by Carrie (migeru at eurotrib dot com) on Thu Feb 21st, 2008 at 04:09:22 PM EST
[ Parent ]
I once met a former physicist turned ERP consultant at IBM, who told me that there was a small energy band between LEP and the LHC that was actually unreachable at the LHC (too low for the beam as designed), and so there was a tiny chance the Higgs had a mass that would make the LHC miss the discovery (but then some upgraded linear accelerator should be able to find it first).

Pierre
by Pierre on Thu Feb 21st, 2008 at 04:29:27 PM EST
[ Parent ]
That rings a bell, too.

We have met the enemy, and he is us — Pogo
by Carrie (migeru at eurotrib dot com) on Thu Feb 21st, 2008 at 04:39:41 PM EST
[ Parent ]
Oops.
by ThatBritGuy (thatbritguy (at) googlemail.com) on Thu Feb 21st, 2008 at 05:45:29 PM EST
[ Parent ]
I don't think a linear collider will be built any time soon. There is one country in the world which said it would pay part of it, but likely only a small part, and that is Germany, after the government cancelled a smaller version at DESY Hamburg.

The US recently cut financing of R&D. And what was the reason?
The Democrats were angry because Bush had vetoed some of their pet projects. In revenge, they cut spending on some of Bush's pet projects.

The American is the orchid among human beings.
Volker Pispers

by Martin (weiser.mensch(at)googlemail.com) on Thu Feb 21st, 2008 at 05:48:42 PM EST
[ Parent ]
There are existing infrastructures like SLAC which keep being upgraded, in part with private funding; they are not entirely out of the race.

Pierre
by Pierre on Fri Feb 22nd, 2008 at 05:53:46 AM EST
[ Parent ]
Ahhh, but you are speaking of the ILC, a superconducting juggernaut of some 50 km which can still only reach the puny collision energy of 0.5 TeV. Why go for such a thing when the much more attractive possibility of CLIC exists? A delicious 3 TeV of electron-positron collisions at ultra-high luminosity. No troublesome, gradient-limiting superconducting cavities either. At 100 MV/m we are far above the theoretical limitations of the enemy competition. First beam? Around 2023, optimistically.

(Me? Biased? Naaahhh...)

by someone (s0me1smail(a)gmail(d)com) on Fri Feb 22nd, 2008 at 08:03:47 AM EST
[ Parent ]
Oh dear, what can the matter be?

You can't be me, I'm taken
by Sven Triloqvist on Fri Feb 22nd, 2008 at 08:05:11 AM EST
[ Parent ]
So make that 2023 into 2035 for real operation, if you apply the history of the LHC.
And ILC technology is in operation already, while CLIC needs a lot of additional R&D. DESY's XFEL is operated with cavities good enough for an ILC.

The American is the orchid among human beings.
Volker Pispers
by Martin (weiser.mensch(at)googlemail.com) on Fri Feb 22nd, 2008 at 08:36:58 AM EST
[ Parent ]
Basically they have to rule out the various masses by proving that they can't see them. Experiments at LEP and the Tevatron have put the mass limits between 114 and ~200 GeV.

A Standard Model Higgs (where it's just a Higgs boson with no other weird physics) decays predominantly into two particles. Which decays dominate depends on the mass of the Higgs - for instance, I'm studying 165 GeV, which is roughly twice the W mass, so in this region the WW decay is by far the most likely.

So far they've not seen this above the background, and the more data they have, the more they can rule it out. The LHC will provide more data in the low-mass (~114 GeV) region, where the main decays are to b/bbar quarks, photons, tau leptons, c/cbar quarks and two gluons. All of these occur anyway in the detector, so the signal is hard to spot - which is why there has been the least progress in this region so far, and why ruling out the less likely WW and ZZ decays has been easier. Although it's the decay that occurs least often, the two-photon decay is the easiest to spot.

If they don't find the Higgs at any of these energies, it means it's not just a Standard Model one - i.e. it behaves not just according to the laws we already know, but according to new physics we've never seen. Supersymmetry and extra dimensions are two of the more prominent theories for why the Higgs may not be in the Standard Model region.

by darrkespur on Thu Feb 21st, 2008 at 04:39:30 PM EST
[ Parent ]
That's true. But it is even better. Unless the Higgs has pretty much exactly 116 GeV, so-called loop corrections will destroy the model once more, if there are not additional particles to shield this effect. As this is a kind of fine-tuning that people are not happy with, there are basically two classes of models introducing new particles.

One is supersymmetry. This gives each particle we already know a heavier superpartner, which has the property of cancelling the contribution of its original particle in these loops (for physicists not close to the subject: it provides a boson partner for every fermion and vice versa. In Feynman diagrams fermion loops come in with a negative sign, boson loops with a positive one, so the superpartners cancel their normal partners). The particle I proposed in the diary to catalyse fusion is the stau, the superpartner of the tau, an even heavier partner of the electron than the muon. In some parameter space it can be relatively long-lived.
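Schematically (my own back-of-the-envelope version, with the ultraviolet cutoff written as Λ), the top loop and its scalar superpartner loop enter the Higgs mass correction with opposite signs:

```latex
% Schematic one-loop corrections to m_H^2 with cutoff \Lambda:
\delta m_H^2 \;\sim\; -\,\frac{3\,|\lambda_t|^2}{8\pi^2}\,\Lambda^2
\;+\; \frac{3\,|\lambda_{\tilde t}|^2}{8\pi^2}\,\Lambda^2 \;+\; \dots
% Fermion loops carry the minus sign, boson loops the plus sign; when supersymmetry
% relates the couplings (\lambda_{\tilde t} = \lambda_t) the quadratic pieces cancel.
```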

The other is some variation of 'technicolor' or 'warped extra dimensions'. This is in general the more 'natural' solution to the problem, but even more than in the case of supersymmetry, one would have expected to find deviations from the Standard Model about 20 years ago.

The good thing about the LHC is that we really enter the region in which these models have to show up, if they are to help solve any problem.

The American is the orchid among human beings.
Volker Pispers

by Martin (weiser.mensch(at)googlemail.com) on Thu Feb 21st, 2008 at 05:40:20 PM EST
[ Parent ]
If the super-particle is OPPOSITE--boson for fermion, fermion for boson--then I do not understand how a stau particle--which would be a boson, right?--could replace an electron (a fermion) in a deuterium atom.  

Unless a tau particle is a boson, which would mean I know even less than I thought. Isn't the tau a fermion like the electron?

Or is Pauli exclusion irrelevant? Higher-order elements could then get interesting: imagine Li with all three (-) particles sitting in the 1s shell.

But anyway, which is it?  I AM confused.  

The Fates are kind.

by Gaianne on Sat Feb 23rd, 2008 at 03:38:05 AM EST
[ Parent ]
Any negatively charged particle will do. You can even use an anti-proton. Lifetimes may be rather short, however.
Exotic atoms cast light on fundamental questions - CERN Courier
The Paul Scherrer Institut in Villigen has investigated pionic hydrogen (π⁻p) and deuterium (π⁻d), and DAFNE in Frascati has investigated their kaonic counterparts. Other no-less-important species include kaonic and antiprotonic helium, which have been studied at the Japanese High Energy Accelerator Research Organization (KEK) and CERN, and yet another exotic variety is formed by the non-baryonic π⁺π⁻ (pionium) and πK atoms. Finally, the antihydrogen atom, pbar-e+, which CERN has copiously produced, is in a class of its own owing to its importance for testing the CPT theorem to extremely high precision.

by someone (s0me1smail(a)gmail(d)com) on Sat Feb 23rd, 2008 at 11:07:27 AM EST
[ Parent ]
I suppose Pauli exclusion is irrelevant since we're talking about Hydrogen. But, in addition, having light bosons filling the orbitals would mean that Pauli exclusion doesn't contribute an "exchange energy" to the interaction between various hydrogen atoms, and so you get the benefit not only of smaller atomic radius but also of the ability of the atoms to actually overlap significantly.

We have met the enemy, and he is us — Pogo
by Carrie (migeru at eurotrib dot com) on Sat Feb 23rd, 2008 at 11:15:32 AM EST
[ Parent ]
I mean heavy bosons.

We have met the enemy, and he is us — Pogo
by Carrie (migeru at eurotrib dot com) on Sun Feb 24th, 2008 at 05:14:29 AM EST
[ Parent ]
Martin:
That's true. But it is even better. Unless the Higgs has pretty much exactly 116 GeV, so-called loop corrections will destroy the model once more, if there are not additional particles to shield this effect. As this is a kind of fine-tuning that people are not happy with, there are basically two classes of models introducing new particles.
Is that "kind of fine tuning" a fixed point of the renormalization group flow? Because, in that case, there's nothing particularly bothersome about the fine tuning: it's an internally dynamically-determined parameter of the model, not an externally finely-tuned parameter.

We have met the enemy, and he is us — Pogo
by Carrie (migeru at eurotrib dot com) on Sat Feb 23rd, 2008 at 11:27:27 AM EST
[ Parent ]
I don't think so. I'm an experimentalist, but it's really about the exact Higgs mass, which should be a free parameter in the Standard Model - otherwise we could stop searching over a range - and the argument is used by just about every theorist who wants to endorse SUSY, so it should be a real effect.

Just as an update for you on what the actual issues are - why else we believe the Standard Model (SM) is incomplete.

Dark matter exists:

I've already seen the picture elsewhere, but it is from here. These are two galaxies flying through each other. The reddish part is X-rays caused by the interaction of the galactic gas. The blue shows where the mass of the galaxies is, as detected by gravitational lensing. So it is clear that most of the mass in the galaxies is not interacting hadronically or electromagnetically.
In principle this matter only has to interact through gravitation - weak interaction is optional - but as weakly interacting particles would have decoupled in the Big Bang at pretty much the abundance that would explain today's dark matter, there is hope that we can find it in colliders, and that it is e.g. the lightest supersymmetric particle, which is stable if supersymmetry is conserved.

Another thing the SM has problems explaining is neutrino masses - why they are so small, even though they are non-zero. (Cosmological structure formation indicates that dark matter is non-relativistic, and hence much heavier than neutrinos, which don't contribute a lot.) We think neutrinos have mass because oscillation from one flavour into another has been detected. Several experiments are trying to find out more about the masses and the (CKM-like) mixing matrix.
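As a quick aside, the standard two-flavour formula shows why observing oscillations implies non-zero mass differences:

```latex
% Two-flavour vacuum oscillation probability:
P(\nu_\alpha \to \nu_\beta) \;=\; \sin^2(2\theta)\,
\sin^2\!\left(\frac{1.27\,\Delta m^2\,[\mathrm{eV}^2]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right),
% which vanishes identically if \Delta m^2 = 0, i.e. if the masses are degenerate (or zero).
```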

It is not clear by which mechanism the SM would create the excess of matter over antimatter that is seen in the Universe. There are ideas for such processes, and even within the SM one can explain some asymmetry, but only about 10^(-5) of the observed effect. More CP-violating phases are needed. Colliders - perhaps more likely the lower-energy B factories (Japan plans a new one) - may help to find them.

Fluctuations in the cosmic microwave background radiation (CMBR) and measurements of the recession speeds of very distant galaxies indicate that most of the energy density in the universe is dark energy.

Dark energy has a negative pressure (which I personally find mind-boggling) and increases the rate of expansion of the universe. It is not clear whether this is just a cosmological constant, as once introduced by Einstein to make his equations consistent with a static universe, or whether it is something dynamical. One hopes to find out more with better observation of the CMBR. It could be that an extremely precise measurement would find a shifting coupling constant alpha, if it is something dynamical.
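To make the "negative pressure accelerates the expansion" statement concrete, here is the relevant piece of the Friedmann equations (units with c = 1):

```latex
% Acceleration equation for the scale factor a(t):
\frac{\ddot a}{a} \;=\; -\,\frac{4\pi G}{3}\,\bigl(\rho + 3p\bigr),
% so a component with p = w\rho and w < -1/3 gives \ddot a > 0;
% a pure cosmological constant corresponds to w = -1.
```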

The American is the orchid among human beings.
Volker Pispers

by Martin (weiser.mensch(at)googlemail.com) on Sat Feb 23rd, 2008 at 05:48:34 PM EST
[ Parent ]
Higgs mass

Okay, let me get this straight: if the Standard Model Higgs is only renormalizable for a particular choice of the Higgs mass, this is not considered a prediction but a flaw of the model. However, if bosonic strings can only be made consistent in 26 dimensions, or superstrings in 10 dimensions and with the help of supersymmetry, these things - for which there is zero empirical evidence - are considered predictions of string theory and not flaws. Moreover, were a supersymmetric particle discovered, this would be considered evidence for string theory, even though supersymmetry doesn't require string theory. You said it right, though:

the exact Higgs mass ... should be a free parameter in the Standard Model - otherwise we could stop searching over a range - and the argument is used by just about every theorist who wants to endorse SUSY, so it should be a real effect.
Since you're an experimentalist I hope you won't take the above personally, but anyway I'll offer you a bet that the Higgs will be found at 116 GeV.
Martin:
Unless the Higgs has pretty much exactly 116 GeV, so-called loop corrections will destroy the model once more, if there are not additional particles to shield this effect.

Dark Energy

As for dark energy, I'm going to go with the cosmological constant until there is evidence to the contrary. As far as I know, even "exotic matter" can't produce a "negative pressure" stress-energy tensor.

Neutrino Mass

I don't consider throwing in one (or several) right-handed neutrinos and a CKM-like mixing matrix a challenge to the Standard Model. "Explaining" the masses of the various particles is a challenge, but as far as I can tell there's no candidate for a theory that does that. [No, string theory is not it: apart from having a proliferation of vacua, the only way they can get a low-energy spectrum of particles is by assuming they all have zero mass; nobody has a mechanism for supersymmetry breaking, and the supersymmetry-breaking scale just introduces a whole bunch of new parameters to explain.]

Dark Matter

My guess is as good as any other - but if dark matter only interacts gravitationally it won't be seen at the LHC. Actually, the quantum numbers match a "superheavy right-handed neutrino" too...

We have met the enemy, and he is us — Pogo

by Carrie (migeru at eurotrib dot com) on Sun Feb 24th, 2008 at 05:13:22 AM EST
[ Parent ]
I won't bet against you. Indirect experimental evidence has its central value below the LEP limit, which is 114 GeV. 116 GeV really looks for the moment to be the best guess, even without the renormalization argument.

I don't consider string theory to be a physical theory at all, as they can always shift their parameters in a way that explains any (non)observation.
My boss completely dislikes SUSY, but we have another professor who has now been working for decades to prove it (without success).

It may well be that the LHC finds only a Higgs (and only after quite a long time of running, if the mass is that low) and nothing else. I'm not at all sure there is something else, although there are some less compelling hints. However, SUSY and some other models should really be dead if the LHC finds nothing.
I only wanted to give you an overview of the reasons why people are searching for other things at all, rather than simply sitting down and saying it is not worth trying because nothing other than a complete SM can be expected anyway.

The American is the orchid among human beings.
Volker Pispers

by Martin (weiser.mensch(at)googlemail.com) on Sun Feb 24th, 2008 at 08:31:02 AM EST
[ Parent ]
Oh, it is definitely worth a try; I never implied otherwise. In fact, given the accessibility of the energy range and the necessarily ad hoc nature of the various models of the Higgs sector, it would be unforgivable not to try.

If the SM Higgs is found, with no evidence of physics beyond the SM below 1 TeV (including "corrections" due to physics at higher energies), I think it will be safe to say that theoretical high-energy physics will have "died of success". There would be no strong case for higher-energy accelerators, leaving aside how difficult it would be to build something to probe the 10 TeV range.

We have met the enemy, and he is us — Pogo

by Carrie (migeru at eurotrib dot com) on Sun Feb 24th, 2008 at 09:44:27 AM EST
[ Parent ]
One needs something to explain masses, basically a scalar field. There are some theories building the scalar field up as a composite of vector fields, but it's rather weird.

I think darrkespur was referring to non-Standard Model Higgses. But as explained elsewhere in the comment section, if the Higgs isn't in the low-mass region, there have to be more particles to prevent other problems from making the theory inconsistent.

The American is the orchid among human beings.
Volker Pispers

by Martin (weiser.mensch(at)googlemail.com) on Thu Feb 21st, 2008 at 05:56:14 PM EST
[ Parent ]
A serious question: what important developments do you think came out of the Apollo project? I hear these kinds of claims quite often, but there's usually less meat to them on closer inspection.

Someone above mentioned HTML as a spin-off of CERN. But should we really believe something similar would not have been invented elsewhere within a few years, had there been no CERN? Especially considering that the thousands of smart people who work on these projects wouldn't disappear, but would be working elsewhere, on other projects with potential spin-offs.

by GreatZamfir on Thu Feb 21st, 2008 at 03:02:53 PM EST
[ Parent ]
Just on your second paragraph: sure, I think somebody else would have come up with something similar some years later. But I think the difference in economic impact, had the WWW been invented five years later, would already be very big.

And on what projects do you think these thousands of smart people would have worked? Financial innovations? Micro-nukes?


The American is the orchid among human beings.
Volker Pispers

by Martin (weiser.mensch(at)googlemail.com) on Thu Feb 21st, 2008 at 03:09:35 PM EST
[ Parent ]
I think they'd probably be working on that fart problem Pierre mentioned. It sounds pretty serious to me.

We must hurry up and act; we have a world to rebuild.
by dconrad (drconrad {arobase} gmail {point} com) on Thu Feb 21st, 2008 at 03:30:23 PM EST
[ Parent ]
You are not seriously claiming thousands of smart people wouldn't have had anything useful to do? What would have happened if LHC funding hadn't gone through? I am quite sure the people involved had other plans - not just the people directly involved, but also all the people working for companies that supply the LHC.

As for HTML, why not invert that idea? Perhaps without CERN it would have been invented earlier... The point is that there is so little relationship between CERN's activities and HTML that it seems too strong to claim that without CERN, the WWW would have taken 5 years more.

After all, the Web depends not just on HTML, but on a whole lot of interdependent technologies, both in hardware and software, that were growing in the 1980s.

by GreatZamfir on Fri Feb 22nd, 2008 at 05:16:56 AM EST
[ Parent ]
You underestimate the importance of HTML in creating the web.

Particle physics had progressed so fast since the 1940s that the particle physics community had developed a system of "preprints", in which people circulated drafts of their papers to colleagues at their institutions and elsewhere months before they were published in journals. The story goes that Tim Berners-Lee got tired of e-mailing documents back and forth to colleagues at CERN and decided to invent HTML and code a bare-bones browser to let him (as we would say today) webcast his research. There is something about the pace of information exchange within CERN and in the particle physics community that supports the idea that HTML might have taken five more years to be developed elsewhere (and it would have been at some university or other: USENET and the text-based tools that go with it, and Gopher, developed in that environment).

The large particle physics laboratories do employ thousands of physicists, engineers and programmers specifically for particle physics experiments, and that is a non-negligible fraction of the respective academic communities. If the large labs didn't exist, these people would be competing for academic jobs elsewhere, which would result in more people going to industry, as well as fewer people getting doctorates.

If LHC funding hadn't gone through, CERN would have stagnated and maybe shrunk. You need far fewer people to run the existing facilities than you do to develop a new facility, and the LHC research programme is much more intense than what can be carried out at the existing facilities (not that that isn't useful too, but it's on a smaller scale in terms of people and resources).

Consider CERN and the LHC a Keynesian stimulus package for physics and engineering.

We have met the enemy, and he is us — Pogo

by Carrie (migeru at eurotrib dot com) on Fri Feb 22nd, 2008 at 05:26:34 AM EST
[ Parent ]
The key thing about CERN was that the people who work there are spread across the planet a lot of the time: HTML - and more importantly HTTP - were designed to solve exactly the problem of sharing information with a widely dispersed geographical community all of whom would be publishing data. It followed on from gopher in some pretty obvious ways but was much less structured, which is its main beauty.
by Colman (colman at eurotrib.com) on Fri Feb 22nd, 2008 at 05:33:14 AM EST
[ Parent ]
As an aside, it's only now, with people producing content all over the place that the original vision for the web is being fulfilled - the phase of company brochure sites was painful to watch.
by Colman (colman at eurotrib.com) on Fri Feb 22nd, 2008 at 06:02:16 AM EST
[ Parent ]
And we're doing it by working around the shortcomings of the current publication model, as well.
by Colman (colman at eurotrib.com) on Fri Feb 22nd, 2008 at 06:02:55 AM EST
[ Parent ]
Thanks for these elucidations. To make it more general, could I say the idea is more or less "fundamental, difficult research is likely to encounter problems ahead of the rest of society, and is therefore relatively likely to find useful spin-off solutions" ?

After all, it is easy to predict in hindsight that CERN would be the perfect place to develop a useful hypertext system. But if one wants to use the unexpected, unpredictable benefits of a project as one of the arguments for funding it, there has to be a rationale for why this particular project or field is especially likely to lead to unexpected benefits.

by GreatZamfir on Fri Feb 22nd, 2008 at 05:56:57 AM EST
[ Parent ]
In addition, "big science" projects tend to have engineering specs just outside what is possible when they are designed. LHC (and, before, LEP) have required faster electronics than existed at the time they were designed, efficient cryogenics, superconducting magnets, and so on. In that way, CERN drives technology development just like, say, the specs for the next generation of high-speed trains or the Shinkansen do. The same is true of NASA's plans for the next generation of space telescopes (including gravitational wave detectors).

So, big science drives technological developments in established fields, as well as occasionally resulting in new technology. [I distinguish two basic modes of technological progress: secular improvements in technology, and new technologies - only the latter qualifies as "innovation" IMHO, and that is not predictable in the way that one can use, say, Moore's law when designing the specs of a computer system to be deployed 5 years in the future.]

We have met the enemy, and he is us — Pogo

by Carrie (migeru at eurotrib dot com) on Fri Feb 22nd, 2008 at 06:03:38 AM EST
[ Parent ]
A bit off-topic, but the improvement/innovation distinction is another view I am rather sceptical about. If you zoom in on the 'improvements', you usually see the same picture again: some of the improvements are seen as radical changes within the field itself, while some still look like gradual improvements. Zoom in on the gradual improvements and it's the same picture again: what looks like gradual improvement from the outside is unexpected innovation close up.

I would argue it's innovation all the way through. Some improvements change a subfield, and from the outside it looks like gradual, expected improvement. Some change a whole field, and the outside world notices and calls it something fundamentally different.

by GreatZamfir on Fri Feb 22nd, 2008 at 07:07:32 AM EST
[ Parent ]
Well, actually, from the point of view of I/O models of the economy there's a distinction between whether an advance just changes the productivity/cost coefficients of the model, or changes its dimensionality by adding a new process or a new product.

The difference between the dynamical systems we are used to considering in physics and biological or economic evolution is the possibility of the system of differential/difference equations changing dimensionality in response to processes within the system itself.

We have met the enemy, and he is us — Pogo

by Carrie (migeru at eurotrib dot com) on Fri Feb 22nd, 2008 at 07:30:00 AM EST
[ Parent ]
I would consider this more an artifact of the modelling than a fundamental point about reality. After all, how do you determine when a new product adds a dimension, rather than changing existing coefficients? As long as a product is a perfect replacement for some existing product, only better along an existing axis, that's easy.

But in reality, new products and inventions, even improvements on existing ones, are usually not that simple. They add an extra dimension, more freedom to find better solutions to problems. But in a high-level, low-dimensional description, this freedom can either be collapsed into a change in parameters or genuinely added as an extra dimension, if the effects are important enough.

Funny thing is, I am currently working on shape optimization, where it is completely natural to change the number of parameters used to describe the shape, and thus the dimension of the problem.

A related field is model order reduction, where you try to (locally) approximate a physical phenomenon by its most important modes. If there is a change in the physics, you can either modify the modes while keeping the same number of them, or you might find that the new situation requires more modes to describe it well enough.

I would suggest this is a good analogy for your innovation/improvement distinction.
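A minimal sketch of that mode-truncation idea (made-up data and an arbitrary energy tolerance, purely for illustration): take snapshots of the system, compute their dominant modes with an SVD, and keep only as many modes as are needed to reach some energy fraction. A different physical regime may then simply demand a different number of modes.

```python
import numpy as np

def reduced_basis(snapshots, energy_tol=0.99):
    """Keep the fewest left-singular vectors capturing energy_tol of the snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    n_modes = int(np.searchsorted(energy, energy_tol)) + 1
    return U[:, :n_modes]

rng = np.random.default_rng(0)
modes = rng.normal(size=(500, 5))                         # 5 "true" spatial modes
snaps_a = modes @ rng.normal(size=(5, 40)) + 0.01 * rng.normal(size=(500, 40))

extra = rng.normal(size=(500, 3))                         # a new regime switches on 3 more modes
snaps_b = np.hstack([modes, extra]) @ rng.normal(size=(8, 40)) + 0.01 * rng.normal(size=(500, 40))

# Number of modes needed in each regime:
print(reduced_basis(snaps_a).shape[1], reduced_basis(snaps_b).shape[1])
```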

by GreatZamfir on Fri Feb 22nd, 2008 at 08:07:51 AM EST
[ Parent ]
Well, a new dimension corresponds to a new manufacturing process, with different inputs. As long as there is substitutability you don't have "true" innovation.

I am familiar with dimension reduction (proper orthogonal modes, principal components, factor analysis...) and you're right, at some level the number of variables is a matter of choice. But you still have to be able to close the system of equations. You can always ascribe the effect of all the neglected modes to "noise", though.

We have met the enemy, and he is us — Pogo

by Carrie (migeru at eurotrib dot com) on Fri Feb 22nd, 2008 at 03:10:26 PM EST
[ Parent ]
Well, you could say that they would create something similar if they were doing something else, and that might be true, but without the funding and support of such a project they wouldn't have the freedom, or the livelihood, to develop these things. There's also a great deal of cross-collaboration in these projects - if people aren't working in science, or are working on smaller projects, the chances of coming up with something spectacular are almost certainly lower.

One of the main things coming out of Apollo etc., IIRC, was the development of computer chips for the project. Large advances in microchips and materials science filtered out to the outside world. While industry might have got there as well, I'd say it would almost certainly have got there more slowly, due to the very nature of business - a business looking at short-term profit is far less likely to allow its researchers the time and space to create a bigger, longer-lived project with its associated spin-offs.

Early research is expensive mainly because you don't know what the right solution is - it could be any one of a number of options, and until you pick one you don't know. So there has to be a lot of investment without too much pressure for immediate results on every route, as many of them will be blind alleys - but without checking, you'll never know which ones are right.

by darrkespur on Thu Feb 21st, 2008 at 03:17:08 PM EST
[ Parent ]
But then the million-dollar question is: why did the US spend the money on Apollo, and not directly on chip research? Especially as guided missiles needing chips were not exactly unimportant outside the Apollo/manned spaceflight program.
by GreatZamfir on Fri Feb 22nd, 2008 at 05:01:44 AM EST
[ Parent ]
Three related reasons: Sputnik angst, the race with the Soviet Union in every field, and aerospace technology development for military purposes.

But the fact is, the Apollo program was a one-shot thing. It was wound down and the US lost its ability to fly to the moon. It also discontinued its high-payload rockets in favour of the Space Shuttle, so now the rocket market is occupied by the European Ariane and the Russian Proton.

The Soviet manned space program made more scientific and technical sense than the American one, and the ISS owes more to the Russian Soyuz and Mir than to the American Skylab, which was also discontinued and folded into the Shuttle.

We have met the enemy, and he is us — Pogo

by Carrie (migeru at eurotrib dot com) on Fri Feb 22nd, 2008 at 05:12:22 AM EST
[ Parent ]
Yeah, I know. I am a final-year aerospace engineering student, so I have heard my fair share of space history...

One side story that I found particularly intriguing was a note between, I think, McNamara and Lyndon Johnson in the early '60s. In it they discuss the budget surpluses they are expecting for the late '60s, and they fear Congress will call for tax cuts before they can use the surpluses for their Great Society plans. So in the meantime they consider Apollo a good and popular way to keep the budget balanced until they have better things to do with the money. Then came Vietnam...

But more on topic: the whole 'spin-off' concept seems to have been pretty much invented by NASA in later years to justify its budgets, and it is used for many 'big science & engineering' projects whenever the wider public has doubts about the costs.

by GreatZamfir on Fri Feb 22nd, 2008 at 05:38:00 AM EST
[ Parent ]
(1) Spend money on chip research for what? What would the chips be used for? The great proliferation of electronics came on the back of very advanced requirements for components for space programs, etc. One could argue that only once they had been developed for such purposes was it possible to consider their use for more mundane matters. The personal computer only became possible with the maturation of integrated circuit technology, computing infrastructure, and computational techniques that allowed for cheap mass manufacture. The drivers for this technology were expensive research programmes in fields requiring the processing of large data sets, such as, say, high-energy physics research. Forget about the direct spin-offs; I would argue that the influences of these expensive programmes are far more subtle. Technological diffusion requires that the basic building blocks are already lying about, looking for a different application. You don't start developing processor technology because you think that in 20 years you'll be able to make a machine to play games on.

(2) Missile programmes, you say? Because military programmes with large destructive potential are soooo useful, while high-energy physics, space exploration and the like are vanity projects! And you know how the military loves to share the technology it develops and would never want to keep it secret. One of the great advantages of the large-lab high-energy physics environment is precisely that it is not a military programme. We don't build things to kill people; I think this is a plus. Further, there is no tendency to keep progress secret. Quite the opposite, in fact, so there is a greater chance that progress made here can diffuse faster and more widely.

(Disclosure, I work at CERN.)

by someone (s0me1smail(a)gmail(d)com) on Fri Feb 22nd, 2008 at 06:06:35 AM EST
[ Parent ]
In other words (and this ties in with my comments on the HTML subthread), technological progress is largely demand-driven. If you want progress you have to create demand for advanced technology. You can choose the form your Keynesian stimulus will take: will it be big science or big guns? Other public spending is in the same category, too. Do you want to drive the development of medical treatments? Improvements in construction techniques and materials? Improvements in transportation technology? Energy technology? The way to do this is to publicly fund projects which push the boundaries of what's possible. The private sector could do this too, but it can't afford the solvency risk of sinking money into failed research. The public sector can. It's just a matter of priorities.

We have met the enemy, and he is us — Pogo
by Carrie (migeru at eurotrib dot com) on Fri Feb 22nd, 2008 at 06:42:09 AM EST
[ Parent ]
I think Apollo was a product, not a cause. After Sputnik there was a massive push towards science and engineering in the US, and Apollo fell out of that. So did most of the computer industry.

There's very little evidence to suggest that Apollo contributed directly to electronic design. The first patent for single-chip microcircuitry was granted in 1958. Computers with modular logic were built around the same time. Putting the two together was the next obvious step, and would have happened anyway.

Apollo was mostly a PR exercise for US science and engineering. There may have been some spin-offs in materials science and - obviously - rocket science. But Apollo hasn't left much of a trace in computer science history.

In fact it's rarely mentioned at all. Projects like SAGE, which was the first generation US air defence network, were much more important. NASA did buy a big IBM System/360 for Apollo, but System/360 was already around, and IBM were more interested in selling it as a tool for airline bookings than managing a space program with it.

by ThatBritGuy (thatbritguy (at) googlemail.com) on Fri Feb 22nd, 2008 at 08:35:45 AM EST
[ Parent ]
One bit of hearsay lore that I picked up somewhere (probably on TV) is that the physical space constraints inherent in spacecraft design prompted Apollo scientists and related engineers in various industries to work on making things like transistors work in a practical setting, as the existing vacuum-tube technologies were simply too big.
by Zwackus on Tue Feb 26th, 2008 at 12:51:39 AM EST
[ Parent ]
I don't think it's about volume; weight is more likely, and I think it was mainly the Minuteman program that really required them. But I would suggest this was only a slight influence. People tried to build integrated circuits all through the '50s, and the first successful ones appeared somewhere around 1960. So there might have been a few years, between their development and their first commercial use in the mid-'60s, when rocket programs were the main users.

Keep in mind that the big advantage of ICs, even in those years, was the possibility of getting prices down through mass production - not really something the space program, or even Minuteman, was very concerned about.

by GreatZamfir on Tue Feb 26th, 2008 at 03:57:56 AM EST
[ Parent ]
GreatZamfir:
Someone above mentioned HTML as a spin-off of CERN. But should we really believe something similar would not have been invented elsewhere within a few years, had there been no CERN?

That's a really hard question to answer. HTML didn't happen directly because of CERN, but it happened because CERN was an environment in which a quick mark-up system would be instantly useful, and because there was no need for 'research' to invent anything more complicated.

There were many, many alternatives to HTML, including horrible things from academia that are best forgotten.

I know people who were researching them, and while they were often better than HTML in many ways - e.g. no broken links - they were also wretchedly overcomplicated, with limited public appeal.

So HTML might well have never happened in its current form. We could easily have had some kind of Windows-ish or other system of gargantuan complexity and slowness.

If you look at academic vs 'public' computing, there's a clear pattern of highly abstracted command-line tools in academia (e.g. LaTeX), and much simpler WYSIWYG colouring-book computing in the public sphere.

HTML broke that pattern by doing something script-ish but relatively simple inside academia, which subsequently escaped into the wild.

That hasn't really happened before, which I think means it's not something that could be relied on.

Or in other words - it's likely CERN got lucky.

by ThatBritGuy (thatbritguy (at) googlemail.com) on Thu Feb 21st, 2008 at 06:01:00 PM EST
[ Parent ]
The semiconductor industry would likely be a few decades behind its current state if there had not been military and space applications for the transistor when it was invented. The reason is the gap between military and commercial viability, defined mostly by cost, and arguably by transistor size (and thus integrated circuit complexity) as well. That gap was filled with public money in the form of the US military budget. The industry grew, and at some point commercial viability started to grow out of that, and henceforth the industry could be sustained as such.

Had that scenario not occurred, the industry would not have existed until advances in other sciences and industries had created commercial viability for it indirectly.

you are the media you consume.

by MillMan (millguy at gmail) on Thu Feb 21st, 2008 at 06:05:40 PM EST
[ Parent ]
I'm not sure it's as clean as that.

Mainframe computing was established by the late 50s, and mini-computing was just starting up. The market was already worth $billions by then. There were some prestige military projects - e.g. SAGE again - and a lot of DARPA funding for research. But the civilian market was already huge, with its own momentum.

Once TI introduced TTL logic in the early 60s, computers became a lot cheaper. At the same time a strong hobbyist culture fanned by magazines kept interest in technology running very high, so there was a steady stream of wannabe engineers with experience of digital techniques from their early teens.

Microprocessors were already being planned in the mid-'60s. The biggest gap was between commercial computing and the microprocessor market, and that was bridged by developing a general-purpose microprocessor and putting it into a commercial product - a desktop calculator. It wasn't a military project.

Now you had hobbyist/hacker culture with access to microprocessors and a background of DARPA funded interface and networking research.

The rest was probably inevitable.

What's astonishing is how fast it happened. Most of the core ideas - laptops, databases, the web, GUIs and interactivity, distributed processing, networking, 3D graphics - appeared between 1958 and 1968.

There's been very little genuinely new since then. Most of what's happened has been faster and cheaper, but not so truly innovative.

by ThatBritGuy (thatbritguy (at) googlemail.com) on Fri Feb 22nd, 2008 at 09:15:00 AM EST
[ Parent ]
It would be interesting to compare commercial to military/government revenues over time. I should study the topic further, because it sits at the intersection of several topics I'm interested in.

The early commercial viability of mainframes is a good point that I managed to forget. I'll still make my vague 20 year claim, though.

I agree that it all happened shockingly fast.

Most of what's happened has been faster and cheaper, but not so truly innovative.

I disagree. Having read IEEE magazines since I became an EE major in college, I can say there has been some stunning work over the years in semiconductor physics, which has been required to get to the commercially viable transistor sizes we have today. From the computing point of view, though, I agree with what you're saying.

you are the media you consume.

by MillMan (millguy at gmail) on Fri Feb 22nd, 2008 at 01:22:41 PM EST
[ Parent ]
There was once a plan for a really huge (I think 80 km) collider in the USA. They had already hired 2000 people. Then the program was canceled. Quite a number of these scientists made their way into finance and made complex derivatives a lot more popular.

So without CERN, Europe might simply have more financial ABS, CDO, CDS, SIV soup. In some countries this counts as a productivity increase, but here I prefer the Internet.

The American is the orchid among human beings.
Volker Pispers

by Martin (weiser.mensch(at)googlemail.com) on Fri Feb 22nd, 2008 at 09:21:12 AM EST
[ Parent ]
Yeah, that one was canned in 1993. The media at the time promoted it as the perfect example of government waste.

you are the media you consume.

by MillMan (millguy at gmail) on Fri Feb 22nd, 2008 at 01:24:34 PM EST
[ Parent ]
An interesting episode in the killing of the SSC was how, during the Congressional hearings, a prominent physicist (someone big, like John A. Wheeler or Murray Gell-Mann or Steven Weinberg) was asked by a congressman whether the LHC would provide evidence of the existence of God. The negative answer did not help.

We have met the enemy, and he is us — Pogo
by Carrie (migeru at eurotrib dot com) on Fri Feb 22nd, 2008 at 03:05:41 PM EST
[ Parent ]
Nice try, but you didn't get the second LHC->SSC.
Deleting your own comments! I see what the frontpagers are up to now... For shame!
by someone (s0me1smail(a)gmail(d)com) on Fri Feb 22nd, 2008 at 03:09:24 PM EST
[ Parent ]
I was only 16 at the time and not particularly into current events, but it was clearly a political circus if there ever was one.

you are the media you consume.

by MillMan (millguy at gmail) on Fri Feb 22nd, 2008 at 03:10:18 PM EST
[ Parent ]
