Archive for the ‘Research Papers’ Category

Contemplating Cultural Boundaries

2013 July 18 Comments off

The Mesh of Civilizations and International Email Flows

Abstract: In The Clash of Civilizations, Samuel Huntington argued that the primary axis of global conflict was no longer ideological or economic but cultural and religious, and that this division would characterize the “battle lines of the future.” In contrast to the “top down” approach in previous research focused on the relations among nation states, we focused on the flows of interpersonal communication as a bottom-up view of international alignments. To that end, we mapped the locations of the world’s countries in global email networks to see if we could detect cultural fault lines. Using IP-geolocation on a worldwide anonymized dataset obtained from a large Internet company, we constructed a global email network. In computing email flows we employ a novel rescaling procedure to account for differences due to uneven adoption of a particular Internet service across the world. Our analysis shows that email flows are consistent with Huntington’s thesis. In addition to location in Huntington’s “civilizations,” our results also attest to the importance of both cultural and economic factors in the patterning of inter-country communication ties.
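The rescaling idea can be sketched in a few lines. Everything below is illustrative: the country codes, user counts, and the product-of-users normalization are hypothetical stand-ins, not the paper's actual data or procedure.

```python
from collections import defaultdict

# Toy email records as (sender_country, receiver_country) pairs, as if
# obtained by IP-geolocating an anonymized log.
emails = [("US", "MX"), ("US", "MX"), ("US", "DE"),
          ("DE", "TR"), ("TR", "DE"), ("MX", "US")]

# Hypothetical per-country user counts on the service (uneven adoption).
users = {"US": 4, "MX": 2, "DE": 2, "TR": 1}

raw = defaultdict(int)
for s, r in emails:
    raw[(s, r)] += 1

# One simple rescaling: divide each directed flow by the product of the two
# countries' user counts, so heavily-adopting countries do not dominate the map.
rescaled = {pair: n / (users[pair[0]] * users[pair[1]])
            for pair, n in raw.items()}
```

With flows normalized this way, a clustering of the resulting weighted country graph is what one would compare against Huntington's civilizational groupings.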


Changing Mass Priorities: The Link between Modernization and Democracy

(modified from original in cited paper)


Huntington: The Clash of Civilizations


I find it interesting that Huntington’s cultural boundaries are to some degree quantifiable.


See also Culturomics 2.0:

Ansatz: … but first of all, let us begin with the inspiration …

2013 June 11 Comments off

Introduction: The study of physics requires both scientific observation and philosophy. The tenets of science and its axioms of operation are not themselves scientific statements, but philosophical statements. The profound philosophical insight precipitating the birth of physics was that scientific observations and philosophical constructs, such as logic and reasoning, could be married together in a way that allowed one to make predictions of observations (in science) based on theorems and proofs (in philosophy). This natural philosophy requires a philosophical ‘leap’, in which one makes an assumption or guess about what abstract framework applies most correctly. Such a leap, called an Ansatz, is usually arrived at through inspiration and an integrated usage of the faculties of the mind, rather than a programmatic application of certain axioms. Nevertheless, a programmatic approach allows enumeration of the details of a mathematical system. It seems prudent to apply a programmatic approach to the notion of Ansatz itself and to clarify its process metaphysically, in order to gain a deeper understanding of how it is used in practice in science; but first of all, let us begin with the inspiration.

A more general treatment of the philosophy of physics and the existence of universes (Hall, 2013)

In physics and mathematics, an ansatz (initial placement of a tool at a work piece) is an educated guess[1] that is verified later by its results.

An ansatz is the establishment of the starting equation(s), the theorem(s), or the value(s) describing a mathematical or physical problem or solution. It can take into consideration boundary conditions. After an ansatz has been established (constituting nothing more than an assumption), the equations are solved for the general function of interest (constituting a confirmation of the assumption).

An ansatz is an assumed form for a mathematical statement that is not based on any underlying theory or principle.

An example from physics is the Bethe Ansatz (Müller).
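A worked miniature can make the "guess, substitute, solve" loop concrete. Here sympy is used to posit an exponential ansatz for the ODE y'' = k²y and let the substitution determine the free parameter; the ODE and the exponential form are a standard textbook illustration, not drawn from any of the works above.

```python
import sympy as sp

x, k, lam = sp.symbols("x k lam")

# The ansatz: guess an exponential form with one free parameter lam.
y = sp.exp(lam * x)

# Substitute into the ODE y'' - k**2 * y = 0; the guess reduces the
# differential equation to an algebraic condition on lam.
residual = sp.diff(y, x, 2) - k**2 * y
condition = sp.simplify(residual / y)   # lam**2 - k**2
solutions = sp.solve(condition, lam)    # lam = -k or lam = +k
```

The "confirmation of the assumption" mentioned above is exactly this step: the guessed form survives substitution, and solving fixes its parameters.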

Adamatzky: Slime Mould Tactile Sensor

2013 June 6 Comments off

Figure 7. Physarum’s morphological responses towards mechanical contact. (a) A protoplasmic tube is distorted by a glass capillary placed across the tube. (b) A zone of extensive growth of Physarum under and at the edges of the glass capillary. (c) Two segments of glass capillary placed on top of Physarum, on an agar blob, are partly colonised by the slime mould. (d) Physarum colonises a plastic disc placed on top of a Physarum sheet wrapping an agar blob; view from below.

Slime mould P. polycephalum is a single cell visible to the unaided eye. The cell shows a wide spectrum of intelligent behaviour. By interpreting this behaviour in terms of computation one can make a slime-mould-based computing device. Physarum computers are capable of solving a range of tasks in computational geometry, optimisation and logic. Physarum computers designed so far lack localised inputs. Commonly used inputs (illumination, chemo-attractants and repellents) usually act on extended domains of the slime mould’s body. Aiming to design massively parallel tactile inputs for slime mould computers, we analyse the temporal dynamics of P. polycephalum’s electrical response to tactile stimulation. In experimental laboratory studies we discover how Physarum responds to the application and removal of a local mechanical pressure with electrical potential impulses and changes in its electrical potential oscillation patterns.

Andrew Adamatzky

In a series of previous works, see overview in [2], we developed a concept and fabricated experimental laboratory prototypes of amorphous bio-computing devices, Physarum machines. A Physarum machine is a programmable amorphous biological computing device experimentally implemented in the plasmodium of P. polycephalum. Physarum polycephalum belongs to the order Physarales, subclass Myxogastromycetidae, class Myxomycetes, division Myxostelida. It is commonly known as a true, acellular or multi-headed slime mould. The plasmodium is a ‘vegetative’ phase, a single cell with a myriad of diploid nuclei. The plasmodium is visible to the unaided eye. It looks like an amorphous yellowish mass with networks of protoplasmic tubes, and it behaves and moves as a giant amoeba. It feeds on bacteria, spores and other microbial creatures and micro-particles [30]. The plasmodium’s foraging behaviour can be interpreted as a computation: data are represented by the spatial distribution of attractants and repellents, and results are represented by the structure of Physarum’s protoplasmic network. In such a specification a plasmodium can solve computational problems with natural parallelism, including optimisation on graphs, computational geometry, logic and robot control, see details in

A Physarum machine is programmed by configurations of repelling and attracting gradients: chemical substances, temperature and illumination. These quantities are often difficult to localise, which makes a precise, fine-grained input of spatial data into Physarum machines problematic. A tactile input of information could be a solution. Thus in the present work we evaluate the feasibility of Physarum acting as a transducer: transforming a tactile stimulation, or a mechanical pressure, into a distinctive pattern of electrical activity. We study how parameters of the oscillations change in response to the application and removal of solid light-weight insulators on Physarum’s protoplasmic tubes or sheet-shaped parts.
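A minimal sketch of the kind of signal processing implied here: picking impulse events out of an electrical-potential trace by thresholding its deviation from baseline. The trace values and threshold are invented; the actual recordings and analysis in the paper are far richer.

```python
# Toy electrical-potential trace (mV), one sample per second; the bump around
# samples 5..8 stands in for an impulse following mechanical stimulation.
trace = [0.1, 0.0, 0.1, -0.1, 0.0, 1.2, 2.5, 1.8, 0.9, 0.1, 0.0]

def detect_impulses(samples, threshold=0.5):
    """Return (start, end) index ranges where |potential| exceeds threshold."""
    events, start = [], None
    for i, v in enumerate(samples):
        if abs(v) >= threshold and start is None:
            start = i                      # an impulse begins
        elif abs(v) < threshold and start is not None:
            events.append((start, i - 1))  # the impulse ends
            start = None
    if start is not None:
        events.append((start, len(samples) - 1))
    return events

impulses = detect_impulses(trace)
```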

Plasmodium of Physarum polycephalum was cultivated in plastic lunch boxes (with few holes punched in their lids for ventilation) on wet kitchen towels and fed with oat flakes.

Physarum Machines (YouTube)

Why My Slime Mold is Better than Your Hadoop Cluster

How brainless slime molds redefine intelligence (Nature)

Lu: Cosmic-Ray-Driven Reaction and Greenhouse Effect of Halogenated Molecules: Culprits for Atmospheric Ozone Depletion and Global Climate Change

2013 May 31 8 comments

Cosmic-Ray-Driven Reaction and Greenhouse Effect of Halogenated Molecules: Culprits for Atmospheric Ozone Depletion and Global Climate Change

Abstract This study is focused on the effects of cosmic rays (solar activity) and halogenated molecules (mainly chlorofluorocarbons, CFCs) on atmospheric O3 depletion and global climate change. Brief reviews are first given of the cosmic-ray-driven electron-induced-reaction (CRE) theory for O3 depletion and the warming theory of CFCs for climate change. Then natural and anthropogenic contributions are examined in detail and separated through in-depth statistical analyses of comprehensive measured datasets. For O3 loss, new statistical analyses of the CRE equation with observed data of total O3 and stratospheric temperature give high linear correlation coefficients >= 0.92. After removal of the CR effect, a pronounced recovery by 20-25% of the Antarctic O3 hole is found, while no recovery of O3 loss at mid-latitudes has been observed. These results show both the dominance of the CRE mechanism and the success of the Montreal Protocol. For global climate change, in-depth analyses of observed data clearly show that the solar effect and human-made halogenated gases played the dominant role in Earth’s climate change prior to and after 1970, respectively. Remarkably, a statistical analysis gives a nearly zero correlation coefficient (R = -0.05) between global surface temperature and CO2 concentration in 1850-1970. In contrast, a nearly perfect linear correlation with R = 0.96-0.97 is found between global surface temperature and the total amount of stratospheric halogenated gases in 1970-2012. Further, a new theoretical calculation of the greenhouse effect of halogenated gases shows that they (mainly CFCs) could alone lead to a global surface temperature rise of ~0.6 °C in 1970-2002. These results provide solid evidence that recent global warming was indeed caused by anthropogenic halogenated gases. Thus, a slow reversal of global temperature to the 1950 value is predicted over the coming 5-7 decades.
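Independent of whether one accepts Lu's attribution (see the Real Climate rebuttal linked below), the statistical core of the comparison is just a Pearson correlation coefficient. A self-contained sketch with synthetic series; the numbers are invented, not the paper's data.

```python
import math

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Synthetic stand-ins, NOT the paper's data: one exactly linear pair and one
# unrelated pair, mimicking the R = 0.96-0.97 vs R = -0.05 style of contrast.
temps  = [0.0, 0.1, 0.25, 0.3, 0.45, 0.5]
halons = [1.0, 1.2, 1.5, 1.6, 1.9, 2.0]   # = 1 + 2 * temps, so r = 1 exactly
noise  = [0.2, -0.1, 0.0, 0.3, -0.2, 0.1]

r_linear = pearson_r(temps, halons)
r_noise = pearson_r(temps, noise)
```

High R between two trending series is of course necessary but nowhere near sufficient for causation, which is the crux of the criticism of the paper.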

Cosmic-Ray-Driven Reaction and Greenhouse Effect of Halogenated Molecules: Culprits for Atmospheric Ozone Depletion and Global Climate Change
Qing-Bin Lu
Comments: 24 pages, 12 figures; an updated version
Subjects: Atmospheric and Oceanic Physics (physics.ao-ph); Atomic and Molecular Clusters (physics.atm-clus); Chemical Physics (physics.chem-ph)
Journal reference: Int. J. Mod. Phys. B Vol. 27 (2013) 1350073 (38 pages)
DOI: 10.1142/S0217979213500732
Cite as: arXiv:1210.6844 [physics.ao-ph]
(or arXiv:1210.6844v2 [physics.ao-ph] for this version)

See also: Lu: from ‘interesting but incorrect’ to just wrong (Real Climate)

Courtney: Studying the Internal Ballistics of a Combustion Driven Potato Cannon using High-speed Video

2013 May 10 1 comment

Figure 2. Average velocity of cylindrical potato projectiles vs. barrel position for each experimental propellant.

A potato cannon was designed to accommodate several different experimental propellants and to have a transparent barrel so the movement of the projectile could be recorded on high-speed video (at 2000 frames per second). Both the combustion chamber and the barrel were made of polyvinyl chloride (PVC). Five experimental propellants were tested: propane (C3H8), acetylene (C2H2), ethanol (C2H6O), methanol (CH4O), and butane (C4H10). The amount of each experimental propellant was calculated to approximate a stoichiometric mixture, taking into account the Upper Flammability Limit (UFL) and the Lower Flammability Limit (LFL), which in turn were affected by the volume of the combustion chamber. Cylindrical projectiles were cut from raw potatoes so that there was an airtight fit, and each weighed 50 (+/- 0.5) grams. For each trial, position as a function of time was determined via frame-by-frame analysis. Five trials were taken for each experimental propellant and the results were analyzed to compute velocity and acceleration as functions of time. Additional quantities, including the force on the potato and the pressure applied to it, were also computed. For each experimental propellant, average velocity vs. barrel position curves were plotted. The most effective experimental propellant was defined as the one which accelerated the potato to the highest muzzle velocity. Acetylene performed the best on average (138.1 m/s), followed by methanol (48.2 m/s), butane (34.6 m/s), ethanol (33.3 m/s), and propane (27.9 m/s).
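The frame-by-frame analysis reduces to finite differences on the position samples. A sketch with hypothetical frame positions; the 2000 fps rate and 50 g projectile mass come from the abstract, but the position values are invented.

```python
FPS = 2000                 # high-speed camera rate from the abstract
dt = 1.0 / FPS             # seconds between frames
mass = 0.050               # projectile mass in kg (50 g, from the abstract)

# Hypothetical projectile positions (m) read off successive video frames.
positions = [0.000, 0.010, 0.035, 0.075, 0.130, 0.200]

# Forward finite differences recover velocity, then acceleration.
velocities = [(b - a) / dt for a, b in zip(positions, positions[1:])]
accelerations = [(b - a) / dt for a, b in zip(velocities, velocities[1:])]

# Force on the potato follows from Newton's second law.
forces = [mass * a for a in accelerations]
muzzle_velocity = velocities[-1]
```

At 2000 fps even small position-reading errors are amplified by the 1/dt factors, which is why averaging over five trials per propellant matters.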

Studying the Internal Ballistics of a Combustion Driven Potato Cannon using High-speed Video
E.D.S. Courtney and M.W. Courtney
BTG Research, P.O. Box 62541, Colorado Springs, CO, 80962
United States Air Force Academy, 2354 Fairchild Drive, USAF

McKinnon: The spatial structure of the annual cycle in surface temperature: amplitude, phase, and Lagrangian history

2013 May 9 Comments off

Fig. 7. (a) Monthly temperature anomalies in the latitude band 45-50°N from the advection model driven by HYSPLIT trajectories versus observations. (b) The gain and lag of the modeled annual cycle in polar coordinates showing land (X’s) and ocean (O’s) boxes. Neighboring gridboxes are connected via a thin gray line. (c) The gain of the modeled annual cycle across longitude at 45-50°N using a zonal wind (gray) and with the inclusion of the HYSPLIT trajectory information (black), as compared to the observations (dashed). Land regions are indicated by shading. (d) Similar to (c) but for lag.

The climatological annual cycle in surface air temperature, defined by its amplitude and phase lag with respect to solar insolation, is one of the most familiar aspects of our climate system. Here, we identify three first-order features of the spatial structure of amplitude and phase lag and explain them using simple physical models. Amplitude and phase lag (1) are broadly consistent with a land and ocean end-member mixing model, but (2) exhibit overlap between land and ocean, and, despite this overlap, (3) show a systematically greater lag over ocean than land for a given amplitude. Based on previous work diagnosing relative ocean or land influence as an important control on the extratropical annual cycle, we use a Lagrangian trajectory model to quantify this influence as the weighted amount of time that an ensemble of air parcels has spent over ocean or land. This quantity explains 84% of the space-time variance in the extratropical annual cycle, as well as features (1) and (2). All three features can be explained using a simple energy balance model with land and ocean surfaces and an advecting atmosphere. This model explains 94% of the space-time variance of the annual cycle in an illustrative mid-latitude zonal band when incorporating the results of the trajectory model. The basic features of annual variability in surface air temperature thus appear to be explained by the coupling of land and ocean through mean atmospheric circulation.
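Amplitude and phase lag of an annual cycle are typically obtained by projecting the temperatures onto the first annual harmonic. A toy version with synthetic gridbox temperatures (not the paper's data): the cycle is built with known amplitude 10 °C and a one-month lag, and the projection recovers both.

```python
import math

# Synthetic monthly mean temperatures (°C) for one gridbox: a 10 °C-amplitude
# annual cycle peaking in month 1, on top of a 15 °C mean.
temps = [10.0 * math.cos(2 * math.pi * (m - 1) / 12) + 15.0 for m in range(12)]

# Project onto the first annual harmonic (cos and sin coefficients; the
# factor 2/12 reduces to dividing the 12-month sums by 6).
a = sum(t * math.cos(2 * math.pi * m / 12) for m, t in enumerate(temps)) / 6
b = sum(t * math.sin(2 * math.pi * m / 12) for m, t in enumerate(temps)) / 6

amplitude = math.hypot(a, b)                              # 10.0 °C
phase_lag_months = math.atan2(b, a) * 12 / (2 * math.pi)  # 1.0 month
```

Plotting (amplitude, phase lag) in polar coordinates for every gridbox, as in Fig. 7b, is what reveals the land and ocean end-members and the overlap between them.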

The spatial structure of the annual cycle in surface temperature: amplitude, phase, and Lagrangian history
Karen A. McKinnon, Alexander R. Stine, and Peter Huybers
Journal of Climate, 2013; e-View


Lehner: Amplified inception of European Little Ice Age by sea ice-ocean-atmosphere feedbacks

2013 May 8 Comments off

Fig. 9. Schematic overview of the feedback loops associated with the Medieval Climate Anomaly-Little Ice Age transition: decreasing external forcing leads to increased sea ice in the Arctic, especially in the Barents Sea. Loop 1: this causes an increased Arctic sea ice export and subsequently an increased import of sea ice into the Labrador Sea. As this sea ice melts, it weakens the Atlantic Meridional Overturning Circulation (AMOC), which in turn reduces the Barents Sea inflow of warm waters, causing further sea ice growth. Loop 2: increased sea ice causes the Barents Sea to become fresher and less dense. Also, wind changes due to elevated sea level pressure (SLP) increase the sea surface height (SSH) in the Barents Sea. As a result of these two processes, the SSH gradient across the Barents Sea opening increases, further reducing the Barents Sea inflow and thereby supporting sea ice growth. Finally, the increased sea ice cover has a direct thermal effect, decreasing surface air temperatures over Northern Europe and an indirect effect by inducing elevated sea level pressure (SLP) that advects cold Arctic air towards Europe.
Amplified inception of European Little Ice Age by sea ice-ocean-atmosphere feedbacks

The inception of the Little Ice Age (~1400-1700 AD) is believed to have been driven by an interplay of external forcing and climate system-internal variability. While the hemispheric signal seems to have been dominated by solar irradiance and volcanic eruptions, the understanding of mechanisms shaping the climate on continental scale is less robust. In an ensemble of transient model simulations and a new type of sensitivity experiments with artificial sea ice growth we identify a sea ice-ocean-atmosphere feedback mechanism that amplifies the Little Ice Age cooling in the North Atlantic-European region and produces the temperature pattern suggested by paleoclimatic reconstructions. Initiated by increasing negative forcing, the Arctic sea ice substantially expands at the beginning of the Little Ice Age. The excess of sea ice is exported to the subpolar North Atlantic, where it melts, thereby weakening convection of the ocean. Consequently, northward ocean heat transport is reduced, reinforcing the expansion of the sea ice and the cooling of the Northern Hemisphere. In the Nordic Seas, sea surface height anomalies cause the oceanic recirculation to strengthen at the expense of the warm Barents Sea inflow, thereby further reinforcing sea ice growth. The absent ocean-atmosphere heat flux in the Barents Sea results in an amplified cooling over Northern Europe. The positive nature of this feedback mechanism enables sea ice to remain in an expanded state for decades up to a century, favoring sustained cold periods over Europe such as the Little Ice Age. Support for the feedback mechanism comes from recent proxy reconstructions around the Nordic Seas.

Amplified inception of European Little Ice Age by sea ice-ocean-atmosphere feedbacks
Flavio Lehner, Andreas Born, Christoph C. Raible, and Thomas F. Stocker
Journal of Climate, 2013; e-View


Wang and Zeng: Development of global hourly 0.5-degree land surface air temperature datasets

2013 May 7 8 comments

Figures extracted from a presentation at the AMS 25th Conference on Climate Variability and Change

Land surface air temperature (SAT) is one of the most important variables in weather and climate studies, and its diurnal cycle and day-to-day variation are also needed for a variety of applications. Global long-term hourly SAT observational data, however, do not exist. While such hourly products could be obtained from global reanalyses, they are strongly affected by model parameterizations and hence are found to be unrealistic in representing the SAT diurnal cycle (even after the monthly mean bias correction).

Global hourly 0.5-degree SAT datasets are developed here based on four reanalysis products [MERRA (1979-2009), ERA-40 (1958-2001), ERA-Interim (1979-2009), and NCEP/NCAR (1948-2009)] and the CRU TS3.10 data for 1948-2009. Our three-step adjustments include the spatial downscaling to 0.5-degree grid cells, the temporal interpolation from 6-hourly (in ERA-40 and NCEP/NCAR) to hourly using the MERRA hourly SAT climatology for each day (and the linear interpolation from 3-hourly in ERA-Interim to hourly), and the mean bias correction in both monthly mean maximum and minimum SAT using the CRU data.

The final products have exactly the same monthly maximum and minimum SAT as the CRU data, and perform well in comparison with in situ hourly measurements over six sites and with a regional daily SAT dataset over Europe. They agree with each other much better than the original reanalyses, and the spurious SAT jumps of reanalyses over some regions are also substantially eliminated. One of the uncertainties in our final products can be quantified by their differences in the true monthly mean (using 24 hourly values) and the monthly averaged diurnal cycle.
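Of the three adjustment steps, the simplest to sketch is the linear interpolation from 3-hourly (ERA-Interim) to hourly; the 6-hourly products instead use the MERRA diurnal-cycle climatology, which this toy version does not attempt. The temperature values are invented.

```python
def interp_to_hourly(samples, step):
    """Linearly interpolate samples spaced `step` hours apart down to hourly."""
    hourly = []
    for a, b in zip(samples, samples[1:]):
        for h in range(step):
            hourly.append(a + (b - a) * h / step)
    hourly.append(samples[-1])  # keep the final sample
    return hourly

# 3-hourly toy temperatures (°C) spanning nine hours -> hourly values.
three_hourly = [10.0, 13.0, 16.0, 13.0]
hourly = interp_to_hourly(three_hourly, 3)
```

The limitation the paper works around is visible even here: straight-line segments flatten the peak of the diurnal cycle, which is why a climatological diurnal shape is preferable for the coarser 6-hourly inputs.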

Development of global hourly 0.5-degree land surface air temperature datasets
Wang and Zeng
Journal of Climate 2013 ; e-View

Kapsch: Springtime atmospheric energy transport and the control of Arctic summer sea-ice extent

2013 May 6 Comments off

Figure S5: Radiative and turbulent flux anomalies at the surface for LIYs from NCEP-DOE R2. The black line shows the sea-ice concentration (ERA-Interim reanalysis). (a) Displayed is the net longwave radiation plus the turbulent fluxes (latent and sensible; in red) and the net shortwave radiation (green). (b) The radiative fluxes are split into their components, but only downwelling longwave (red) and shortwave (green) radiation are shown, together with the latent (dark blue) and sensible (light blue) heat flux. All time series are based on daily anomalies of LIYs and averaged over the area indicated by the red box in Supplementary Fig. 2. A 30-day running-mean filter is applied to all time series.

Springtime atmospheric energy transport and the control of Arctic summer sea-ice extent

The summer sea-ice extent in the Arctic has decreased in recent decades, a feature that has become one of the most distinct signals of the continuing climate change [1-4]. However, the inter-annual variability is large: the ice extent by the end of the summer varies by several million square kilometres from year to year [5]. The underlying processes driving this year-to-year variability are not well understood. Here we demonstrate that the greenhouse effect associated with clouds and water vapour in spring is crucial for the development of the sea ice during the subsequent months. In years where the end-of-summer sea-ice extent is well below normal, a significantly enhanced transport of humid air into the region where the ice retreat is encountered is evident during spring. This enhanced transport of humid air leads to an anomalous convergence of humidity and to an increase in cloudiness. The increase in cloudiness and humidity results in an enhancement of the greenhouse effect. As a result, downward long-wave radiation at the surface is larger than usual in spring, which enhances the ice melt. In addition, the increase in clouds causes an increase in the reflection of incoming solar radiation. This leads to a counter-intuitive effect: in years with little sea ice in September, the downwelling short-wave radiation at the surface is smaller than usual. That is, the downwelling short-wave radiation is not responsible for initiating the ice anomaly but acts as an amplifying feedback once the melt has started.
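The 30-day running mean used to smooth the daily anomalies in Fig. S5 is a standard centered filter; a minimal version, using a 3-day window on invented data in place of the paper's 30-day window:

```python
def running_mean(series, window):
    """Centered running mean; the window shrinks near the series edges."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

# Toy daily anomalies; a 3-day window stands in for the paper's 30-day one.
daily = [0.0, 3.0, 0.0, 3.0, 0.0]
smoothed = running_mean(daily, 3)
```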

Springtime atmospheric energy transport and the control of Arctic summer sea-ice extent
Marie-Luise Kapsch, Rune Grand Graversen & Michael Tjernström
Nature Climate Change

Zaman: A Bayesian Approach for Predicting the Popularity of Tweets

2013 May 3 Comments off

FIG 7. Graphical model of the Bayesian log-normal-binomial model for the evolution of retweet graphs. Hyper-priors are omitted for simplicity. The plates denote replication over tweets x and users v_xj.

We predict the popularity of short messages called tweets created in the micro-blogging site known as Twitter. We measure the popularity of a tweet by the time-series path of its retweets, which is when people forward the tweet to others. We develop a probabilistic model for the evolution of the retweets using a Bayesian approach, and form predictions using only observations on the retweet times and the local network or “graph” structure of the retweeters. We obtain good step ahead forecasts and predictions of the final total number of retweets even when only a small fraction (i.e. less than one tenth) of the retweet paths are observed. This translates to good predictions within a few minutes of a tweet being posted and has potential implications for understanding the spread of broader ideas, memes, or trends in social networks and also revenue models for both individuals who “sell tweets” and for those looking to monetize their reach.
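A drastically simplified stand-in for the paper's model: keep only the binomial piece with a conjugate beta prior on the per-user retweet probability, ignoring the log-normal timing component and the graph structure entirely. All numbers here (the prior, the counts, the reach) are hypothetical.

```python
# Hypothetical beta prior: roughly 2% of exposed users retweet.
ALPHA, BETA = 2.0, 98.0

def predict_final(retweets, exposed, total_reach):
    """Posterior-mean retweet probability times the total eventual reach."""
    a = ALPHA + retweets                # successes update alpha
    b = BETA + (exposed - retweets)     # failures update beta
    p_posterior = a / (a + b)
    return p_posterior * total_reach

# 30 retweets among the first 1000 exposed users, 10000 users of total reach.
estimate = predict_final(30, 1000, 10000)
```

The appeal of the conjugate update is that it refines the forecast after every observed retweet, which matches the paper's finding that useful predictions emerge from only the first fraction of the retweet path.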

A Bayesian Approach for Predicting the Popularity of Tweets
Tauhid Zaman, Emily B. Fox, Eric T. Bradlow
arXiv:1304.6777 [cs.SI]

