My Favourite Movies: The Magnificent Seven
My love of Westerns can easily be traced back to my father, who was a huge fan of the genre. This particular example is, in my opinion, one of the best of its type. Based on the Japanese classic The Seven Samurai, it tells the story of an oppressed Mexican village that seeks help from American gunfighters down on their luck. Fortune is with them when they hire Chris – played iconically by Yul Brynner – to find the men they need. Each of the six additional gunfighters is introduced in a cameo scene that gives an insight into his character as well as his flaws. They are all, including Brynner's Chris, lost souls who have spent their lives being the best at what they do (with the noted exception of the youngest member, Chico): killing rather than being killed. Now, as guns for hire, they have the opportunity to reflect on their profession and to wait for a chance to redeem the part of their humanity buried under years of brutality.
The reason I think that this movie has stood the test of time and remained one of my favourite films is that it is much more than a simple cowboy film. It's a film about life choices, it's about regrets and, above all else, it's about honour. The two main characters – played by Brynner and McQueen (pictured above) – are, despite their backgrounds, men steeped in ideals of honour. Despite the fact that they are being paid hardly anything at all, they put their lives on the line and even return to the fight because of their agreement with the peasants. This is the fact that so confounds the bandit leader played by Eli Wallach, because he is a man singularly without any idea of honour. As an aside, I was most impressed that when asked why they came back, Chris declined to answer – underlining the fact that Wallach's character should have known.
For years after seeing this film I simply wanted to be the Yul Brynner character. I guess in some way he became one of my childhood heroes – for reasons I'm only now beginning to understand. It obviously struck a chord with other people too when 'Chris' was resurrected in robot form – as an unstoppable killer – in the classic '70s sci-fi film Westworld. As a standard Western The Magnificent Seven is a classic of its type but, digging just a little deeper, it is also much more than that. Watched with a critical eye it's about the choice of virtue over vice, good over evil. It's just so much more than a cowboy film.
Welcome to the thoughts that wash up on the sandy beaches of my mind. Paddling is encouraged… but watch out for the sharks.
About Me
- CyberKitten
- I have a burning need to know stuff and I love asking awkward questions.
Monday, November 30, 2009
Saturday, November 28, 2009
The Moral Dead Zone
by Robert C. Koehler for Common Wonders
Thursday, January 15, 2009
"Mr. Ban said too many people had died and there had been too much civilian suffering." That almost bears repeating, but I won't because I don't believe it. Too many? In the moral dead zone of the human heart, perennially justified as "war" (evoking honor, triumph, glory), there's no such thing as too much suffering. There's no bleeding child or shattered family or contaminated water supply that can't be overlooked in the name of some great goal or strategic advantage, or converted to fodder for the next round of hatred, revenge and arms purchase. Ban Ki-Moon, the U.N. secretary general, about to embark on a peace and diplomacy tour of the Middle East, was speaking, of course, about the hellish conditions in the Gaza Strip, pummeled by Israel with modern weaponry and Old Testament fury for the last three weeks. Vengeance is mine, sayeth the coalition government. Close to a thousand have died. Many more thousands have been injured or displaced. Too many?
No. Not even close. If too many had died - if hell had reached its capacity, or some other limit had at last been achieved - something would change. The collective enterprise of human violence would convulse and start malfunctioning. Fear, perhaps, would mutate into courage, anger into forgiveness, hatred into love. Or at least we would start looking at what we're doing . . . how do I say this? With evolved compassion? With an understanding, with a determination to survive, we now disdain and mock? Israel's invasion of Gaza is the world's spotlight war right now, reaping headlines, global censure, a special endorsement from the U.S. Congress and, apparently, an audiotape hiss from Osama bin Laden, possibly from beyond the grave. What all of these reactions do, it seems to me, is confer an unwarranted special status on the war, as though it were isolated, without a context any deeper than its accompanying propaganda. This forces us to try to understand the war strictly on its own terms - who started it? who's the bad guy? who's innocent? - rather than as an occurrence within a larger, dysfunctional system as deep as human history and as wide as planetary politics.
This war, and the nine or 10 other armed conflicts officially classified as wars that are going on right now - including wars in the Democratic Republic of the Congo (4 million dead since 1997), Darfur-Sudan (500,000 dead since 2003), Somalia (400,000 dead since 1988), Sri Lanka (80,000 dead since 1983), and of course Iraq (possibly a million or more dead) and Afghanistan (35,000 dead) - whatever they are on their own terms, are also symptoms of a human syndrome of self-destruction. So are the local conflicts on city streets and other jungles that are too small to be called wars. So are the horrific aftermaths of conflicts that have officially ended, including poisoned environments, the ruined health of participants and bystanders, unexploded mines and bombs, the psycho-spiritual traumas that never go away, and the grievances that fester from generation to generation. What links them in an immediate way is the global arms industry, as corrupt as it is invisible, which does a trillion dollars worth of business annually worldwide, is crucial to every major economy and is therefore served, either with overt collusion or discreet silence, by governments and the mass media. But the problem is bigger than mere greed. The business of war, like war itself, defies rational control and containment because it is fed by the paradox of human fear. As we arm to protect ourselves and fight back, our enemy also arms, and thus is born, over and over again, the cycle of escalation, from which the cynical can profit handsomely. The industry of war is self-perpetuating. It should come as no surprise, therefore, that, as Anup Shah noted recently in an essay on the arms industry for GlobalIssues.org, "The top five countries profiting from the arms trade are the five permanent members of the United Nations Security Council: the U.S.A., U.K., France, Russia and China." 
Thus world peace - at least the sort of peace that most of us envision, which is sustained by international cooperation and universal disarmament rather than subjugation and the capacity for hair-trigger retaliation - would challenge the status quo of the world's largest economies, as they have come to constitute themselves.
As long as we stay trapped in the paradox of fear, we can't even use our intelligence to save ourselves. We have employed it to serve only our self-destruction. The ultimate paradox is that the military industrial complex, that highest of high-tech human endeavors, about which Dwight Eisenhower sounded the alarm nearly half a century ago, is wedded to the most primitive of human emotions. We have become trapped in our collective reptile brain. Only if we disarm our intelligence do we have a chance to find wisdom. And only wisdom can save us.
[Enough said, I think.]
Friday, November 27, 2009
Thursday, November 26, 2009
Just Finished Reading: Faith in the Age of Reason by Jonathan Hill
This little volume has been sitting on one of my shelf units for some time now. I picked it up a few weeks ago for a change of pace. On reading the blurb I almost put it back unread when I discovered that it was part of a series of books on key figures and periods in Christian History. But I thought, what the heck, and gave it a go.
It actually turned out to be a pretty good overview of the period known as the Age of Reason – which the author dates from 1648 to 1789. I had assumed, wrongly as it turned out, that the book would be viewing the period from a Christian perspective. What it actually did, which (as far as I can recall) none of my previous history books on the period have done, is to weave religious happenings into the otherwise secular story of that period. Some of the names I recognised: Luther, Calvin and so on… Many, though, I did not. I did, however, recognise most of the Enlightenment scholars mentioned. Some of the streams of Christianity I recognised too – though again many I did not. What impressed me most about this little book is its even-handedness. I was expecting it to be either overly critical of Enlightenment advances in thought or overly sympathetic to the Christian responses, but the author managed throughout to steer a middle course, pointing out the strengths and weaknesses of both sides. I actually learnt quite a lot about a period that is, from my reading to date, either ignored or side-lined. I certainly have a more rounded opinion of the period and I shall continue to delve into it in future. Overall this was a pretty good introduction to the intellectual life of a fascinating period in European history.
Wednesday, November 25, 2009
Tuesday, November 24, 2009
Monday, November 23, 2009
Thinking About: Beer
I’ve never been what you could call a big drinker. Even in my drinking years at University I could never really keep up with the big boys and, thankfully, quickly stopped trying to. By the end of my degree I could certainly ‘down a few’ without appreciable signs of wear and tear but, paradoxically, that was half the problem – actually getting off-my-face drunk was starting to cost a fortune. Strangely, I never really liked drinking to excess that much. Partially because I just couldn’t see the point – oh, it was fun for a while, but only for a short while – partially because I’ve never really liked pubs (they’re much better now after the nationwide smoking ban) and I’ve always hated throwing up. Added to that was the horrible realisation that as I got older – leaving University at 26 – my hangovers were getting progressively worse. Sticking mainly to vodka helped, but still the day after the night before became an increasing write-off.
The opportunity to cut back arrived after graduation – followed by a period of unemployment. Being on the dole meant that I had a simple choice – eat or drink. I chose to eat. It quickly dawned on me that I actually didn’t miss the booze and quite happily cut back almost to nothing. Getting a job in London didn’t change that very much. I was living a 30 minute train journey away from where I worked so any drinking sessions with the guys after we clocked off were normally short-lived. When I moved here things changed a little bit. I had a few close friends in the city and they were fairly big drinkers – at least in their youth. So I had increasing opportunities to get back into bad habits. Admittedly my alcohol intake did increase but only ever episodically and I rarely got myself into hang-over territory. I discovered what my limit was and, through diligence and some practice, refined my drinking behaviour to a point where I could maintain a merry state without tipping over to being drunk and disorderly. For a while there if I wasn’t drinking shorts – vodka still being my favourite along with gin – I tended to drink Bud. It was light enough so I wouldn’t get drunk (or merry) too quickly and I didn’t spend half my night in the toilet. One night that all changed when I was re-introduced to real ale. I have never looked back.
My first introduction to proper beer was, of course, in my University years, when one of the guys introduced me to the delightful Theakston’s Old Peculier (or OP as it’s normally called). This lovely dark beer is a favourite memory of mine from that period. Needless to say, I have started drinking it again down here. As my appreciation of ale grew I made a point of trying out the local ales wherever I went. My preference, though, was always for dark beer – the darker the better. Indeed, one of the best pints I’ve had in recent memory was on a trip – my only one so far – to the US. We were in San Francisco on the way back home from Australia and found ourselves in a micro-brewery run by the San Francisco Brewing Company. It was their 14th Anniversary, so we felt that it would’ve been rude not to stop for one or two. We ended up staying for several hours and getting very drunk indeed. But it was beautiful beer and left me the next day without a trace of hangover. It was a very pleasant way to end a great holiday.
Just a few weeks ago I discovered a new favourite ale called Old Tom (I actually clocked it because it had a picture of a cat on the front – sad, I know), which turned out to be a lovely dark beer with a deceptive kick. That really shouldn’t have surprised me, it being 8.5% ABV – roughly double the alcohol content of my other favourite dark beer, Guinness. Beer may not exactly be my life, but I think it’s going to be a (small) part of my life at least into the near future. Apparently it’s good for my heart, and anyway I like the taste.
Saturday, November 21, 2009
Found: first amino acid on a comet
by Maggie McKee for New Scientist
17 August 2009
An amino acid has been found on a comet for the first time, a new analysis of samples from NASA's Stardust mission reveals. The discovery confirms that some of the building blocks of life were delivered to the early Earth from space. Amino acids are crucial to life because they form the basis of proteins, the molecules that run cells. The acids form when organic, carbon-containing compounds and water are zapped with a source of energy, such as photons – a process that can take place on Earth or in space.
Previously, researchers have found amino acids in space rocks that fell to Earth as meteorites, and tentative evidence for the compounds has been detected in interstellar space. Now, an amino acid called glycine has been definitively traced to an icy comet for the first time. "It's not necessarily surprising, but it's very satisfying to find it there because it hasn't been observed before," says Jamie Elsila of NASA's Goddard Space Flight Center, lead author of the new study. "It's been looked for [on comets] spectroscopically with telescopes but the content seems so low you can't see it that way."
Comets and asteroids are thought to have bombarded the Earth early in its history, and the new discovery suggests they carried amino acids with them. "We are interested in understanding what was on the early Earth when life got started," Elsila told New Scientist. "We don't know how life got started ... but this adds to our knowledge of the ingredient pool." Jonathan Lunine of the University of Arizona agrees. "Life had to get started with raw materials," he told New Scientist. "This provides another source [of those materials]." The amino acid was found in samples returned to Earth by NASA's Stardust mission, which flew by Comet Wild 2 in 2004 to capture particles shed by the 5-kilometre object.
The samples in Elsila's study came from four squares of aluminium foil, each about 1 centimetre across, that sat next to a lightweight sponge-like "aerogel" that was designed to capture dust from the comet's atmosphere, or coma. The researchers reported finding several amino acids, as well as nitrogen-containing organic compounds called amines, on the foil in 2008. But it was not clear whether the discoveries originated in the comet or whether they were simply contamination from Earth. The researchers spent two years trying to find out – a painstaking task since there was so little of the comet dust to study. In fact, there was not enough material to trace the source of any compound except for glycine, the simplest amino acid.
With only about 100 billionths of a gram of glycine to study, the researchers were able to measure the relative abundance of its carbon isotopes. It contained more carbon-13 than that found in glycine that forms on Earth, proving that Stardust's glycine originated in space. "It's a great piece of laboratory work," says Lunine. "It's probably something that couldn't have been done remotely with a robotic instrument – it points to the value of returning samples."
Elsila says she would like to see samples returned not just from a comet's coma but from its main body, or nucleus. "There might be more complex mixtures [of amino acids] and higher levels of them in a comet nucleus," she told New Scientist. Europe's Rosetta spacecraft should help shed light on the issue. The first mission designed to orbit and land on a comet's nucleus, it will reach Comet 67P/Churyumov-Gerasimenko in 2014 after a 10-year journey from Earth.
[The evidence is increasing that life here began (at least partially) out there with the help of meteor and comet impacts bringing in fairly complex chemicals to add to the soup already bubbling in our oceans. If that is the case – as it appears to be – not only is life on Earth becoming more reasonable and more explainable, but it’s looking more likely that life exists elsewhere wherever the conditions allow. It’s not a matter of if we find life on other worlds but it’s a matter of when.]
Friday, November 20, 2009
Just Finished Reading: Hegel – A Very Short Introduction by Peter Singer
GWF Hegel was undoubtedly one of the most important European philosophers of the early 19th Century and had a huge influence on the ideas of the 19th and 20th Centuries, particularly through the works of Karl Marx. His influence probably stemmed from his strong belief – hardly questioned at the time – that history itself operated with a purpose: to ultimately produce the perfect society and the perfect people to live in it. He proposed that few men are truly free because they do not understand the world or themselves sufficiently and are, therefore, victims of strong emotion and avoidable ignorance. Hegel proposed that each human mind is but a small piece of a universal mind which strives through history to understand itself. It is this mind, this spirit, that drives history forward. The universal mind is central to Hegel’s thinking and much of his philosophy flows from it.
Singer has managed to produce, in a scant 113 pages, a decent overview of one of the most influential – and, to be honest, most opaque – philosophers of recent times. I’ve come across some of his ideas before but have tended to shy away from them, appreciating how difficult he can be to understand. Whilst not exactly fear free I am, at least, more open to ‘having a go’ at Hegel in the future. I think he’s quite important to get a handle on given his influence on both Marx and Nietzsche. It might indeed be argued that without at least an appreciation of Hegel it is difficult to truly understand the modern world. That being said, you should expect to hear more about him – if not actual books by him – in the future. A recommended book for those who have thought about investigating Hegel but were unsure how to start.
Thursday, November 19, 2009
Wednesday, November 18, 2009
Tuesday, November 17, 2009
Monday, November 16, 2009
Letter to a ‘German Friend’.
You never believed in the meaning of this world, and you therefore deduced the idea that everything was equivalent and that good and evil could be defined according to one’s wishes. You supposed that in the absence of any human or divine code the only values were those of the animal world – in other words, violence and cunning. Hence you concluded that man was negligible and that his soul could be killed, that in the maddest of histories the only pursuit for the individual was the adventure of power and his only morality, the realism of conquests. And, to tell the truth, I, believing I thought as you did, saw no valid argument to answer you except a fierce love of justice which, after all, seemed to me as unreasonable as the most sudden passion.
Where lay the difference? Simply that you readily accepted despair and I never yielded to it. Simply that you saw the injustice of our condition to the point of being willing to add to it, whereas it seemed to me that man must exalt justice in order to fight against eternal injustice, create happiness in order to protest against the universe of unhappiness. Because you turned your despair into intoxication, because you freed yourself from it by making a principle of it, you were willing to destroy man’s works and fight him in order to add to his basic misery. Meanwhile, refusing to accept that despair and that tortured world, I merely wanted men to rediscover their solidarity in order to wage war against their revolting fate.
I continue to believe that this world has no ultimate meaning. But I know that something in it has a meaning and that is man, because he is the only creature to insist on having one. This world has at least the truth of man, and our task is to provide its justification against fate itself. And it has no justification but man; hence he must be saved if we want to save the idea we have of life. With your scornful smile you will ask me: what do you mean by saving man? And with all my being I shout to you that I mean not mutilating him and yet giving a chance to the justice that man alone can conceive.
Albert Camus, Paris, July 1944.
Sunday, November 15, 2009
Saturday, November 14, 2009
Smart machines: What's the worst that could happen?
by MacGregor Campbell for New Scientist
27 July 2009
An invasion led by artificially intelligent machines. Conscious computers. A smartphone virus so smart that it can start mimicking you. You might think that such scenarios are laughably futuristic, but some of the world's leading artificial intelligence (AI) researchers are concerned enough about the potential impact of advances in AI that they have been discussing the risks over the past year. Now they have revealed their conclusions. Until now, research in artificial intelligence has been mainly occupied by myriad basic challenges that have turned out to be very complex, such as teaching machines to distinguish between everyday objects. Human-level artificial intelligence or self-evolving machines were seen as long-term, abstract goals not yet ready for serious consideration.
Now, for the first time, a panel of 25 AI scientists, roboticists, and ethical and legal scholars has been convened to address these issues, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI) in Menlo Park, California. It looked at the feasibility and ramifications of seemingly far-fetched ideas, such as the possibility of the internet becoming self-aware. The panel drew inspiration from the 1975 Asilomar Conference on Recombinant DNA in California, in which over 140 biologists, physicians, and lawyers considered the possibilities and dangers of the then emerging technology for creating DNA sequences that did not exist in nature. Delegates at that conference foresaw that genetic engineering would become widespread, even though practical applications – such as growing genetically modified crops – had not yet been developed.
Unlike recombinant DNA in 1975, however, AI is already out in the world. Robots like Roombas and Scoobas help with the mundane chores of vacuuming and mopping, while decision-making devices are assisting in complex, sometimes life-and-death situations. For example, Poseidon Technologies sells AI systems that help lifeguards identify when a person is drowning in a swimming pool, and Microsoft's Clearflow system helps drivers pick the best route by analysing traffic behaviour. At the moment such systems only advise or assist humans, but the AAAI panel warns that the day is not far off when machines could have far greater ability to make and execute decisions on their own, albeit within a narrow range of expertise. As such AI systems become more commonplace, what breakthroughs can we reasonably expect, and what effects will they have on society? What's more, what precautions should we be taking?
These are among the many questions that the panel tackled, under the chairmanship of Eric Horvitz, president of the AAAI and senior researcher with Microsoft Research. The group began meeting by phone and teleconference in mid-2008, then in February this year its members gathered at Asilomar, a quiet town on the north California coast, for a weekend to debate and seek consensus. They presented their initial findings at the International Joint Conference for Artificial Intelligence (IJCAI) in Pasadena, California, on 15 July. Panel members told IJCAI that they unanimously agreed that creating human-level artificial intelligence – a system capable of expertise across a range of domains – is possible in principle, but disagreed as to when such a breakthrough might occur, with estimates varying wildly between 20 and 1000 years. Panel member Tom Dietterich of Oregon State University in Corvallis pointed out that much of today's AI research is not aimed at building a general human-level AI system, but rather focuses on "idiot-savant" systems good at tasks in a very narrow range of application, such as mathematics.
The panel discussed at length the idea of an AI "singularity" – a runaway chain reaction of machines capable of building ever-better machines. While admitting that it was theoretically possible, most members were skeptical that such an exponential AI explosion would occur in the foreseeable future, given the lack of projects today that could lead to systems capable of improving upon themselves. "Perhaps the singularity is not the biggest of our worries," said Dietterich. A more realistic short-term concern is the possibility of malware that can mimic the digital behavior of humans. According to the panel, identity thieves might feasibly plant a virus on a person's smartphone that would silently monitor their text messages, email, voice, diary and bank details. The virus could then use these to impersonate that individual with little or no external guidance from the thieves. Most researchers think that they can develop such a virus. "If we could do it, they could," said Tom Mitchell of Carnegie Mellon University in Pittsburgh, Pennsylvania, referring to organised crime syndicates. Peter Szolovits, an AI researcher at the Massachusetts Institute of Technology, who was not on the panel, agrees that common everyday computer systems such as smartphones have layers of complexity that could lead to unintended consequences or allow malicious exploitation. "There are a few thousand lines of code running on my cell phone and I sure as hell haven't verified all of them," he says. "These are potentially powerful technologies that could be used in good ways and not so good ways," says Horvitz, and cautions that besides the threat posed by malware, we are close to creating systems so complex and opaque that we don't understand them.
Given such possibilities, "what's the responsibility of an AI researcher?" says Bart Selman of Cornell, co-chair of the panel. "We're starting to think about it." At least for now we can rest easy on one score. The panel concluded that the internet is not about to become self-aware.
[Well, at least they’re starting to think about the implications of AI. That’s a hopeful sign]
Friday, November 13, 2009
Thursday, November 12, 2009
Just Finished Reading: The Girl with the Long Green Heart by Lawrence Block
Johnny Hayden is an ex-grifter, an ex-conman. He’s also an ex-con, having served two years in San Quentin. He now works as the night manager in a bowling alley and dreams of owning his own restaurant. Enter ex-partner Doug Rance with the perfect scheme. He is planning to fleece big-time real estate entrepreneur Wallace Gunderman for $100,000 – more than enough for Johnny to buy his dream. The plan is perfect and cannot fail because Doug has someone on the inside – Gunderman’s long-suffering girlfriend who wants to hurt him bad. Against his better judgement Johnny agrees to run one last con. But it’s not long before he realises that there are two cons running and that he might be on the sticky end of at least one of them.
After quite a gap I decided to give the Hard Case Crime series another shot. Having read three books in this series so far I must admit that I haven’t been particularly impressed. This book is one of the best so far and has, at least partially, renewed my faith in things. It was well written, well paced and certainly kept me guessing almost to the end. I liked the ending too – nothing too dramatic or too flat. Overall the characterisation was pretty good, as was the scam itself and the whole feel of things. Not exactly top class literature – not that I was really expecting anything of that quality – but good, solid, page-turning stuff, and whilst I didn’t exactly become attached to the characters as I sometimes do in novels, I was interested enough in how things were panning out that I kept on turning those pages. Reasonable.
Wednesday, November 11, 2009
Tuesday, November 10, 2009
Monday, November 09, 2009
Just Finished Reading: Politics of Fear – Beyond Left and Right by Frank Furedi
I’ve just ‘rediscovered’ Frank Furedi several years after reading his short book Where have all the Intellectuals gone? About the rise of 21st Century Philistinism. In this book – and in some of his other works that I have acquired recently – he addresses the failures of modern politics and, in particular, the increasing use of fear by politicians on both sides to manipulate their populations.
It is actually quite difficult to summarise such a densely argued book and still do it justice. However, I’ll give it a shot. Furedi argues that both the Left and the Right have lost touch with – actually abandoned – what makes their particular ideological stands so distinct from each other. Indeed, he argues, they have largely abandoned ideology altogether. This I definitely agree with, at least on this side of the Atlantic. Both sides have attempted to dominate the so-called middle ground and it is becoming increasingly difficult to differentiate between Left policies and Right policies. This is a consequence, Furedi puts forward, of the Right wing cutting loose its roots in the past and tradition, and the Left wing’s dismissal of a utopian future. Both political wings are consequently now almost totally focused on the eternal present. In order to motivate people to align themselves with non-ideological proposals both sides (now barely distinguishable) use fear to persuade people to vote their way.
Unfortunately as politicians become more interchangeable and as their policies, which hardly warrant that name any more, become more focused on the here-and-now, people rapidly lose interest in the whole democratic process and simply decide to stay away from the polling stations come election time. In response to this politicians increase the fear factor and attempt to involve people – whilst at the same time distancing them – on single issues rather than fostering an involvement in politics itself. With the rising use of focus groups and other faux democratic processes individuals previously recognised as citizens or even voters are now seen as consumers of political ideas tailored to particular problems. With the resulting lack of power even more voters turn their back on the whole process. Voters are increasingly being treated like children and this on-going process further alienates people from democracy. I remember vividly some years ago when the Conservatives failed to win a General Election that they blatantly blamed the electorate for being too stupid to understand their platform of ideas. This is hardly the way to garner votes I thought.
Furedi proposes that the way out of this mess is the re-humanisation of humanism in such a way that we stop seeing ourselves as, and stop accepting the label of, vulnerable creatures who exist merely at the whim of fate or circumstances far beyond our control. We need to see ourselves as capable of autonomous action and self-determination. We need to see that there are indeed alternatives and to reject the present malaise caused by both a fear of the future and a disconnection from the past. In order to move beyond the eternal present we must understand our history and have the strength to actively choose our future.
I was very impressed by this short volume, as you might be able to tell, and have already bought a further two of his works. Furedi seems to have a valuable insight into the stagnant politics of the 21st Century and I’m looking forward to gaining a better understanding of his ideas. Highly recommended to anyone with a political bent.
Saturday, November 07, 2009
My Favourite Movies: Ferris Bueller’s Day Off
I have been a huge John Hughes fan since my 20’s and love pretty much everything he has produced. But one of my particular favourites has to be Ferris Bueller. Made in 1986, it still manages to make me laugh out loud after more than 20 viewings. I just watched it again this afternoon and probably enjoyed it almost as much as the first time I saw it. Starring the very talented Matthew Broderick as the eponymous Bueller and the beautiful Mia Sara as his girlfriend, this is the story of Ferris taking a day off High School (‘sagging’ or ‘bunking’ as we called it) to spend it with his friends in Chicago. What little tension there is in this movie – after all it is a teen comedy – comes from the attempts of the school Principal to catch him in the act and his sister, played by Jennifer Grey (one year away from Dirty Dancing), who wants to get revenge on her brother who seems to get away with everything.
Of course the fun part of the film revolves around the friends’ adventures in the big city, where the ‘borrowed’ Ferrari is taken for a ride by the parking attendants, where Ferris bluffs his way into a posh restaurant, and takes over a float (pictured above) in the German American parade through the city. It’s often very silly (with the occasional lapse into being outrageous) but is generally good clean innocent fun. Broderick makes it his film – as you would expect – and carries it really well. Everyone else is basically there as a prop for him and, for someone so young, he does a very good job of making the audience believe that he lives a charmed life. Mia Sara is somewhat underutilised as his eye-candy girlfriend but does manage to round out her character with some good one-liners. His best friend Cameron, played by Alan Ruck, is understated but funny, learning at last to stand up to his domineering father, just before ‘killing’ his favourite car. In some ways that was the whole point of the day – to teach Cameron to stand on his own feet. In many ways this is a typically formulaic Hughes film. But Hughes produced some of the best teen movies of the 80’s and even the worst of his films have flashes of brilliance. This, being far from his worst, has something to recommend in almost every scene. A great film for a lazy wet afternoon. If you haven’t experienced the Hughes effect then this is an excellent place to start.
Friday, November 06, 2009
Thursday, November 05, 2009
Just Finished Reading: Black Steel by Steve Perry
Sleel is a matador – one of the galaxy’s elite bodyguards. Hired to protect an old friend from multiple assassination attempts, he finally fails and is forced to watch his friend be beheaded by a master swordsman wielding a black sword. Saved from a suicidal attempt to regain his honour by Kee, sister of an ex-lover and a fellow matador, he begins to regain his self-respect as he learns to use a sword instead of his usual guns. Meanwhile Cierto, the man who killed his friend and almost killed him, has designs on Kee. He is looking for the mother of his unborn son and he has selected her for that role – whether she wants it or (preferably) not.
This was an unashamedly rip-roaring juvenile adventure novel with evil bad guys and damaged good guys. Here you will find few nuances or shades of grey. Here you will find simple but well plotted action scenes where good guys kill and bad guys fail – more often than not due to their own arrogance. Despite the fact that all of the main characters have sometimes extensive back stories that explain their actions (and inactions) there is little subtlety here. You get exactly what you would expect from this sort of book. It is, however, far from a by-the-numbers, throwaway piece of fluff. Perry has peppered this book (and others in this series) with a heavy dose of Eastern mysticism and samurai spirit – seen both from the dark side and the light. It did, at times, remind me of Star Wars in the way that The Force could be used for both good and evil. It’s that kind of dynamic. This is undoubtedly a fun and exciting read. It won’t tax your brain but it definitely won’t bore you. Recommended for a few days of pure entertainment.
Wednesday, November 04, 2009
Tuesday, November 03, 2009
Monday, November 02, 2009
Just Finished Reading: The Evils of Revolution by Edmund Burke
This book (another in the Penguin Great Ideas series) is actually a set of extracts from Burke’s much larger work Reflections on the Revolution in France, published in 1790. I think it might have been published in response to something Thomas Paine wrote, or Paine’s book was a response to this. Either way I think they both produced some heated debate on the subject.
I had expected to find Burke’s views less than palatable, as I understood him to be both very right-wing and deeply traditional. Surprisingly, I found that he made a great deal of sense. Part of the problem I did have with these extracts (I intend to read the full work at some point) was the rather strange prose – and spelling – used in the late 18th century. It certainly took a while to get used to, though I couldn’t help thinking that at least some of the spelling had been modernised for 21st-century readers. A larger problem was that my knowledge of that era is somewhat limited. I certainly knew of some of the events mentioned in this volume – specifically the Glorious Revolution of 1688 – but almost nothing of the detail. My knowledge of the French Revolution is certainly much better, but nothing like that of Burke, who lived with it every day.
Burke’s critique of the Revolution in France is quite damning. Even though, as far as I know, the Terror had yet to take hold, he clearly saw the way things were already going. Being a traditionalist, his idea of societal change was gradual, orderly and British. Revolution, he saw, went against everything he knew to be true or respected. What is worse, he believed, it just didn’t work. Tearing a society down to its foundations in order to build a better one was clearly insane and could only lead to disaster. Much better, he thought, to change things through evolution – though he probably wouldn’t have used that word – than revolution. One of the more interesting sections in this regard was his deep distrust of democracy as a reasonable way to run a country. He actually made a good case for restricting suffrage. All in all this was a very interesting slice both of 18th-century history and political philosophy. I shall see about acquiring the full work and look forward to his detailed analysis. Before that, however, it might be a good idea to bone up on the period a bit more so I don’t feel so lost next time.