Permaculture seems to be the way to go to reset the way mankind grows food.
This is a good read that tries to make the case for why this is necessary.
If you're wondering what it's all about, I suggest going through this BBC documentary.
Google has made a deal to buy energy at a fixed price from a massive wind farm for 20 years.
This is one of several actions the company is taking to make its own plan, called RE<C, happen. The goal of this plan is to make Renewable Energy cheaper than Coal. According to them, to reach this goal they are moving in three different areas:
effective policy, innovative technology and smart capital
They are backing innovative technology, financing really interesting and promising startups that are trying to exploit non-conventional energy sources, like this one for Enhanced Geothermal Power.
They also invested a gazillion (roughly) dollars in the Makani startup, which is developing a system to gather energy from tropospheric wind. The company seems to be developing a prototype able to produce 10 kW from high-altitude wind.
Tropospheric wind is the same energy source that the KiteGen KGR is trying to exploit. KiteGen, however, is at a more advanced stage: two industrial installations able to deliver 3 MW (which is 3000 kW) are being built right now in north-western Italy. And that is only the beginning: bigger projects of massive scale are already planned.
Maybe Google should take a second look and throw some money at the KiteGen project too?
So Facebook has introduced Places (a geolocation service like Foursquare), which you can use to tell people where you are and which your friends can use to tag you in a particular place, just like with pictures.
I have mixed feelings about this idea of geolocation. Of course it can be a cool way to share a really cool place you have dug out, but constantly revealing where you are in the world can be misused by other people.
What this post is really about, however, is the classic applied-to-everybody-on-release-day policy that Facebook adopts each time it introduces a new feature. A normal user who is not aware of the new feature will find herself using it (or other people using it on her, like tagging her in a place) without ever having decided to in the first place.
I signed myself off. For your info, here's what you've got to do:
An opt-in policy, where the user accepts the new feature from Facebook, would be not only a polite way to deal with users, but also a professional approach to introducing new features.
From the BBC News
Around 100 people have rallied outside Google’s California offices to protest against controversial proposals to alter how data is treated over the web.
Pay to play. This is the kind of format being prioritized on wireless networks. The creation of high-priority web channels on wireless networks to transmit data could really disrupt the internet as we know it. The internet must remain open, with no differentiation in the speed at which two pieces of content are delivered.
If this is not the case, innovation will be badly harmed, and both consumers and producers will suffer from the reduced possibilities, because the big players could establish a sort of monopoly.
Google is still defending its position. Which is very sad and makes them evil…
It seems that this proposal from Google-Verizon addresses net neutrality (basically, every piece of content has the same priority as every other, without any kind of classification). I had mixed feelings when I read that blog post; however, I'm not an expert, so I refrain from judging it good or bad.
The idea of splitting the network into two subsystems with different priorities (wireless and landline) seems odd to me. We are accessing the same internet and the same content; the only difference is how we connect to it.
A really well done and thought-through analysis has been written by the EFF folks, who are quite expert in these things. I highly suggest stopping by and reading it to get a clearer view of the matter.
Via Luca De Biase
Disclaimer: This is a long one. Get a cup of coffee and ten solid minutes of your time; otherwise, leave now.
When I meet new people and I'm asked to introduce what I do, it usually takes at least 3 minutes to give a proper overview. Usually, if the person is interested, I'll go deeper and deeper, using an onion strategy: diving into the details of my research subject, going from a very simplified explanation to a more and more precise one.
The interesting thing is the recurrent question which arises at the end of my explanation: Why? What is the reason behind that?
I think this is a very interesting question and I have a personal answer, which I will tell you in no time. However, before that, let me briefly introduce my research so we are on the same page. Then I will take a risk and try to generalize this to the whole modelling world.
In my Ph.D research I'm modelling the Medium Spiny Neuron of the Basal Ganglia. If you want to know why this is interesting, and you want more detailed information about it, just go to my academic page; otherwise, here let's just say that I'm investigating how memory works, trying to shed some light on the complicated business of memory and learning. In computational neuroscience we have a lot of different data, spanning from physiology to morphology to biochemical pathways. However, all this data usually belongs to quite well-defined, separate areas of expertise, and it is not integrated. I'm trying to develop a coherent theory which integrates all these areas, which we can then use as a tool to understand the system.
The system I'm studying is non-linear, which means a lot of different and concurrent processes influence each other, with different magnitudes and at different time dynamics. The network of relations is intricate, and the different delays make it really difficult to build a static representation which can explain the situation.
This is why modelling is useful, and in my case it is quantitative modelling. One way to try to understand this system is to create a model where we can simulate what's going on, then run it to try to catch the emergent properties of the system and isolate them. If this approach is successful, it will give us knowledge about how the Medium Spiny Neuron should work in physiological conditions, at least in some precise situations.
So what? I hear you say. Well, we have a good representation of what is going on, which is the main idea of basic research. But there is more, so keep reading.
Let's say the system (in this case the Medium Spiny Neuron) can be found in pathological as well as in normal conditions.
If we know how to simulate both the physiological and the pathological conditions, then we have the possibility to understand the difference between the two. This difference, or Δ (delta), as I like to think about it borrowing a classic mathematical notation, is what differentiates the system in the two conditions:
Δ = physiological − pathological
This can also be done from an experimental point of view. You can replicate the two states using experiments, but given that the system is very complex, many different methods can be used to force it into the pathological condition. Usually the system acts like a black box: you know what you put in, you can read what you get out, but you don't know what's going on inside the box. In other words, you don't know the Δ; said another way, you don't know why the two conditions are different.
Therefore the job of the model is to try to open the box.
Now, if you know what the Δ is and why it exists, you also have a good indication of what is going wrong in the pathological condition: what is missing, what is overproduced, or, in more general terms, what is the bit that differs between the two systems. Then you have a starting point from which to look for a way to patch it.
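To make the Δ idea concrete, here is a deliberately toy sketch (not my actual MSN model, and every parameter here is made up for illustration): a one-variable "firing rate" system integrated with Euler steps, run once with assumed healthy parameters and once with an assumed pathological reduction in input drive, so the Δ falls out as a simple subtraction.

```python
# Toy illustration of the "Δ = physiological - pathological" idea.
# The system is dx/dt = -x/tau + drive, integrated with Euler steps.
# All parameter values below are invented for the example.

def simulate(tau, drive, steps=1000, dt=0.01):
    """Euler-integrate dx/dt = -x/tau + drive from x=0; return the final x."""
    x = 0.0
    for _ in range(steps):
        x += dt * (-x / tau + drive)
    return x

physiological = simulate(tau=0.5, drive=2.0)  # assumed healthy input drive
pathological = simulate(tau=0.5, drive=1.2)   # assumed reduced input drive
delta = physiological - pathological          # the Δ discussed above
```

In a black-box experiment you would only see the two outputs and their difference; in the model you can see exactly which parameter (here, the drive) produced that Δ, which is the whole point of opening the box.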
I think this doesn't apply only to computational neuroscience, but to all models that deal with complex systems.
That's why I think modelling is important. It would be cool to know the thoughts of my 25 readers, if they made it up to here. Comments are open, as usual.
It works very well.
After upgrading to the latest Android system (2.2) and resetting the phone, with a data connection (I'm on an O2 bolt-on for internet, £7.50 per month for 500 MB), the phone works very well.
This morning I just transferred a PDF file from my computer (which runs Ubuntu 10.04… I mean GNU/Linux…) to the phone using Bluetooth.
Just start the service, make the PC and the phone discoverable, and it's done. No black magic.
Now I'm back home, and one of my jobs failed on the cluster. (I read my mail over HSDPA.) My internet is down because Virgin has been having trouble for two days now.
So I've just shared the connection from my phone, and I'm using it to go online, submit the failed job after correcting the problem, and write this post.
I don't know what you think, but I think this is sweet.