Scientific Scribbles

The voice of UniMelb Science Communication students

Are killer robots really going to take over the world? The answer is maybe

“Artificial intelligence could spell the end of the human race,” the late physicist Stephen Hawking once said. And he wasn’t alone in his concerns. Bill Gates, Elon Musk and Steve Wozniak, along with some of the most prominent researchers in the field, have expressed their unease at the concept of unchecked AI development. So what are they worried about? And if they’re scared, does that mean we should be too?

 

Let’s start with the least likely scenario and work our way up:

 

Sentient, human-like androids

 

Anyone who’s watched HBO’s Westworld or the classic Blade Runner knows that making a robot think and feel like a human is a very bad idea. Any other “super” AI won’t *necessarily* want to eliminate us unless we get in its way, but giving the full range of human emotions to what is essentially a superhuman we’ve created to exploit would probably be enough to make any of us want to go on a killing spree. Fortunately, this scenario is the least likely: while we have managed to create some convincingly human AI, we are a long way from programming human emotion, let alone creating something with a true sense of “self”.

 

Intelligence “explosion”

 

A slightly more likely scenario is one in which an AI learns how to self-improve or replicate, resulting in exponential self-improvement or replication and an enormous, uncontrollable advantage over humans (referred to as an “intelligence explosion”). One example in popular culture is Skynet, the AI that serves as the primary antagonist of the Terminator franchise. The actions of a superintelligent AI would not necessarily be detrimental to the human race, depending on the core drives of its original code; however, it is possible it would see us as a threat to those drives, or to itself (just like Skynet does).

 

This is still not a likely scenario for the near future; however, this possibility was the primary cause of Stephen Hawking’s concern about the future of AI. The physicist maintained that if researchers begin to consider and pre-empt it in their development of AI now, the threat would be significantly lower.

 

Lethal autonomous weapons

 

If you have seen the episode of Black Mirror featuring that terrifying robot dog, you know exactly what a takeover by weaponised robots looks like. And if that episode didn’t frighten you at all, maybe this will: fully autonomous weapons have already been used, with lethal force, in Israel and South Korea to take out “threats” on their borders. The US and Russia have both been developing their own weaponised autonomous drones and vehicles. And despite widespread public support for a ban on these weapons, changes to international law to that effect have been blocked by all of the above countries as well as Australia.

 

Weaponising AI goes against Asimov’s well-known “first law of robotics”: a robot must not injure a human being. Without this safeguard, we are vulnerable to a number of scenarios that don’t require robot superintelligence. Depending on how available and how easily manipulable these weaponised robots are, there is a possibility of someone deliberately programming them to seek out and kill anyone and everyone. Alternatively, and more likely, an automated weapon could be unable to distinguish enemies from its own side, and would continue to attack indiscriminately. Finally, an error might occur in which otherwise specific directions on who to attack are lost.

 

This scenario, or at least one in which autonomous weapons are responsible for mass human loss, is unfortunately not as unlikely as the others above, and will hopefully be prevented through the banning of this kind of AI.

 

Economic takeover

 

Robots are potentially going to be responsible for a large part of the economy, with 47% of US jobs having a high probability of being automated by 2050. Some people are concerned this will lead to widespread unemployment and poverty, even more extreme wealth inequality between regions and countries, and unbalanced global power due to unequal access to the technology. Others are more hopeful that automation will result in a shorter work week and higher living standards.

 

Of all the above scenarios, this is the most likely to actually happen. Fortunately, we don’t need to feel so hopeless about this one – I, for one, am going to stop using the self-serve checkouts at Woolworths so much. It’s the little things that count, right?


Brain-eating blobs from Hell

It’s a sweltering summer’s day in outback Australia. Sweat rolls off your forehead in glistening beads, stinging your eyes and salting your tongue. The country radio blares: ‘Keep cool today, folks – we’re heading for a top of 44 degrees’. But not to worry – you’ve just stumbled upon an inviting roadside dam. A lonely gum tree shades the murky water, a tire dangling from one branch. You jump out of the car and dive into the tepid pool without a second thought. Pure bliss! Thoroughly refreshed, you hop back in the ute and speed away. A week later, you’re dead.

A much more inviting pool than the one you saw from the roadside – but probably just as deadly.
Image by yaruman5, via Flickr.

Hot water means hungry zombies

If pathologists were to perform an autopsy on your corpse, they’d find nothing unusual – just a run-of-the-mill bunch of organs. That is, unless they opened up your skull. Why? Meet Naegleria fowleri – better known as the brain-eating amoeba.

The three life stages of Naegleria – they want your brains!
Image provided by the United States Centers for Disease Control and Prevention (USCDC), via https://www.cdc.gov/parasites/naegleria/.

This unassuming little microbe was discovered in 1965 in a South Australian children’s hospital, when a spate of fatalities was traced back to Naegleria’s presence in unchlorinated tap water. In the decades since, infections have been reported from countries as far afield as Pakistan and the US. Unlike many of the parasites that afflict the human race, Naegleria doesn’t need a host to survive or carry out its life cycle. Naegleria spends its day-to-day life gulping up bacteria in warm freshwater pools – but when an unsuspecting human hops in for a quick splash, Naegleria sees greener pastures. Why waste energy chasing after tiny microbes when there’s a smorgasbord right in front of you?

Mind on the menu

Naegleria wants to eat your brains, but how does it get to them? Swallowing this creepy amoeba won’t lead to infection, since stomach acid would make short work of any stray Naegleria. Entering via an open wound is also fruitless: the blood-brain barrier would comfortably block access to your grey matter, if the immune system didn’t fight off infection first. To undercut the natural defences of our body, this ingenious invader takes the road less travelled. Its strategy is nothing to sniff at: it climbs into the cranium through your nose.

A schnozz of this size? Naegleria can only dream.
Image by Christine H., via Flickr.

When a swimmer in a Naegleria-infested pool accidentally inhales water, some of the droplets make it all the way up the nostrils – and some Naegleria come along for the ride. Nestled at the top of our nose is a structure called the cribriform plate. This plate is full of tiny holes that support our olfactory nerves; these nerves are crucial for our sense of smell and taste. They’re also the ropes Naegleria uses in its upward trek. After climbing through the cribriform plate, the tiny zombies can sense the biggest feast of their life is coming, and start to multiply.

At this point, it’s game over for the brain. The hungry amoebae whip out their feeding apparatus (a bizarre sucking structure called an amoebostome) and start to munch away. Cell by cell, the brain starts to break down. First the olfactory bulb, then the frontal lobe, and then… death. After a week of headaches, back pain and seizures, the victim draws their last breath. With nowhere else to go, the Naegleria eventually die too – but what an incredible last supper!

Actual recreation of Naegleria eating your brain cells. Image by Paul Gallo, via Flickr.

Beating the brain-eaters

Naegleriasis is incredibly deadly, so it may scare you even more to know that there’s no reliable cure to fight it! Trials with antifungal drugs and whole-body cooling have saved a few patients, but the number of successes could be counted on your fingers. Most survivors also come away from the near-death experience with permanent brain damage (which you might expect after being infected by a parasite that eats your nervous system). Is there any silver lining? Yes!

Naegleriasis is extremely rare – globally, only around 300 cases have been recorded in the five decades since 1965. By contrast, almost as many people drowned in Australia between 2016 and 2017. So long as you avoid shooting unchlorinated water up your nose-holes, and take a noseclip on your next lake swim, the chance of infection is next to none.

This isn’t Night of the Living Dead. Hungry amoebae aren’t about to break into your bedroom and eat you alive. But if you’re ever tempted to cool off in a suspicious-looking pond, think twice about inviting the zombie apocalypse into your nose. 

 

 


Why is the weather forecast always wrong?

You know what I’m talking about. You’ve planned the beach trip for days. It took forever to organise between your friends. Towel packed, sunscreen on, only to arrive to find 20 degrees, overcast skies, and storm clouds in the distance. Why can’t those guys at the weather station do their *** job right???

 

At least, this is what happened to me on the weekend. But before cursing the meteorologists responsible, I had to take a moment and remember that they, too, are simply scientists just trying to make the most accurate predictions they can with the data they’ve got. This is known as scientific modelling, and we use it to make millions of predictions every day.

 

Modelling is inherently uncertain

 

One of the most quoted aphorisms in science is the statistician George Box’s assertion that “All models are wrong, but some are useful.” Models are our way of describing and predicting, in theoretical terms, something that exists in real terms. This means any given model is built on two fundamental limitations: first, we can only include a finite number of variables, and second, our measurements of those variables are bound to contain some error. And for the atmospheric systems that meteorologists study, small errors can have big consequences.

 

The butterfly effect

 

One butterfly flaps its wings and alters the course of the universe. That’s how the theory goes, right? Well, not quite. The butterfly effect actually has its roots in meteorology, where a scientist named Edward Lorenz found that if he rounded his original data slightly differently, the weather predictions based on that data changed dramatically. Today, weather models are created with the help of near-constant streams of data from observation points across the surface of the Earth and extending up into the atmosphere. If each of these observations is only 99.99% accurate, all of those 0.01% errors add up, leaving a lot of space for unseen butterflies.
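You can actually see Lorenz’s discovery for yourself with a few lines of Python. The sketch below is my own toy example (not any real forecasting code): it steps his classic three-variable convection system forward from two starting points that differ by just one part in a hundred thousand, the kind of difference a rounding choice introduces. For a while the two “forecasts” agree, and then they drift apart completely.

```python
# A toy demonstration (not Lorenz's original code) of sensitivity to
# initial conditions, using his classic three-variable convection model.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one small Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

# Two starting points that differ by 0.00001 -- like rounding a reading.
a = (1.0, 1.0, 1.0)
b = (1.00001, 1.0, 1.0)

for step in range(3001):
    if step % 500 == 0:
        print(f"t = {step * 0.01:4.0f}   difference in x = {abs(a[0] - b[0]):.6f}")
    a = lorenz_step(*a)
    b = lorenz_step(*b)
```

Early on the gap stays microscopic, but by the end of the run the two trajectories bear no resemblance to each other – which is exactly why forecasts lose skill the further ahead they look.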

 

Weather data is real-time

 

On top of the possibility of high error rates, weather forecasters have to bring us the *most up to date* forecast possible in order to be accurate. This means their computers are continuously pumping out new predictions in response to the real-time data they receive. These predictions still need to be interpreted before they go to air, however, so by the time you hear them, forecasts are likely to be either slightly out of date or the result of a very hasty human interpretation (which, we all know, is one of the biggest sources of error in the world).

 

We are much better than we used to be

 

Having said all of this, weather prediction is far more accurate than it was when Lorenz was waiting 20 minutes for his 1950s supercomputer to process data that was only accurate to within 750 km². With the wealth of data inputs and the incredible processing power of today’s computers, our ability to forecast will only increase as technology improves.

 

We often take for granted how often the weather report is actually correct, and instead only notice when it isn’t. But in the end, I’ll bet you still checked the weather before you left the house today, didn’t you?


Pointing Fingers at Sexuality

Could a palm reader be trusted to predict your sexuality? According to a new study, hand proportion could be an indicator of sexual preference. However, just like palmistry, and fortune telling in general, the findings of the study should be considered with an abundant and healthy dose of skepticism.

Image by Joy Silver via Flickr


The findings, published in the Archives of Sexual Behavior, state that if the lengths of a woman’s left index and ring fingers are not equal, she is less likely to be straight. Previous studies have assumed that the difference in index-to-ring-finger ratio has to do with exposure to testosterone. Males generally have a longer ring finger, while females tend to have the two roughly equal.

To avoid confounding factors, the study looked exclusively at identical twin pairs: 18 female pairs and 14 male pairs, each pair containing twins who differed in sexual orientation. The women with non-straight preferences had a greater proportion of ‘masculine’ hands in comparison to their heterosexual co-twins. But even given this result, it’s still incredibly unlikely you’d be able to predict a person’s sexuality by looking at their hands alone.

First off, the number of participants in the study is too small to support any conclusions about the general population. In the past, the ‘masculine’ finger-length configuration has been linked to athletic prowess and intelligence, but those studies have also lacked reproducibility, and extrapolating them to the rest of the population may be just as fruitless as predicting sexuality from fingers.
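To get a feel for how little 18 pairs can tell us, here’s a quick back-of-the-envelope sketch in Python. The count below is purely hypothetical (the study’s real figures aren’t reproduced here); the point is how wide the statistical uncertainty is at this sample size.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical count: suppose 12 of the 18 non-straight twins showed the
# 'masculine' finger pattern (the study's actual figure isn't quoted here).
low, high = wilson_interval(12, 18)
print(f"Observed rate: {12/18:.0%}; plausible true rate: {low:.0%} to {high:.0%}")
# With n = 18 the interval spans roughly 44% to 84% -- far too wide to
# say anything firm about the general population.
```

An interval that wide is consistent with the trait being common, uncommon, or anywhere in between, which is exactly why a 32-pair study can only ever be a starting point.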

Secondly, it’s important to note that the link between testosterone and finger length is far from confirmed. Known as the Manning hypothesis, this idea is used widely in biological and biobehavioural studies, but has not yet been clinically verified. A paper from six years ago completely failed to confirm the hypothesis.

And third, sexuality does not have a single cause, and it is certainly not all down to testosterone. A person’s sexuality is most likely shaped by many influences. Broadly, these fall into genetic, epigenetic and environmental categories, and so cannot be read off by simply looking at a person’s hand.

Human sexuality is complex and fluid, and linking it to a simple agent such as testosterone limits our understanding. On top of that, the expectation that gay and bisexual men exhibit more feminine characteristics is deeply rooted in old stereotypes that we should have moved past now that it’s 2018.

Instead of seeking to reduce sexuality down to a single indicative trait, studies should aim to embrace the complexity of the issue and seek to understand it more completely. Maybe then we can move forward and better accommodate the spectrum of sexual preferences within our society.


Hacking the Brain to boost cognition

Would you be interested in enhancing your brain power effortlessly?
Would you like easy H1s, no worries?
Would you like that juicy 7.0 flat GPA?

Well I know I definitely wouldn’t complain!

Today’s blog post discusses an emerging technology, how it could make this happen, and the underlying science behind it.

The Tech

The technology I was alluding to a moment ago is known as Vagal Nerve Stimulation, and it’s a type of brain-computer interface.

However, what makes this treatment different to traditional brain-computer interfaces is that it does not involve directly stimulating the brain with electrodes. Phew!

In fact, the vagus nerve is located in the neck, and so provides an indirect route to modulate brain function.

This makes it a very attractive technology to use as a therapeutic, but also to commercialise.

The tech itself evolved as a treatment for people who have drug-resistant epilepsy and depression.

However, patients soon began reporting improved mood and cognition.

Subsequent research then revealed that this nerve possesses brain-enhancing capabilities.

Importantly, wireless stimulators now exist.

So don’t worry, you don’t need to get on an operating table any time soon for that H1!

 

An image of a brain integrated into a computer network. Photo by Drew Dbenedua via Flickr

The science

The best way to explain how this works is to compare it to meditation.

The stimulation activates the same nerve in the neck that is responsible for the practice’s cognition-enhancing benefits.

Studies have shown that the stimulation engages the limbic system of the brain, a system involved in emotional processing, regulation and memory, just like meditation does.

The research has even shown that the stimulation switches the brain into an “alert” state, making it more receptive to incorporating new information into existing networks.

Issues with the technology

Although the technology seems marvelous, and the lazy man’s way out of meditating for cognitive benefits, I don’t recommend jumping onto an online distributor and purchasing one anytime soon.

The reason is that the true underlying mechanism, and the long-term consequences of using this technology, are poorly understood.

You may improve cognition temporarily by using it, but you may also damage your ability to perceive a particular emotion. Who knows?

Thus, for now, if you really want a way to improve the brain’s ability to process information without the worry of potentially harmful side effects, just meditate.

Honestly, why risk using a technology that is just trying to mimic the benefits of something you can already do by just breathing?

The decision is up to you!

A wise individual meditating instead of using a stimulation device to enhance cognition with little knowledge of its long-term consequences. Photo by useitinfo via Flickr

But I think we should all keep an eye on up-and-coming brain-computer interfaces regardless.

The rate at which technology is advancing is unparalleled.

It is just a matter of time before we are hooked up to the mainframe with USB hard drives holding our extra memory.

If you’re interested in learning more about vagus nerve stimulation in the domain of cognition, feel free to read more here.

