RAVENOUSCHAN

Technology | Science | Gaming | Melbourne

Launching back to the Moon more than 40 years later

And it’s not a trip for astronauts

Last year Elon Musk, founder and CEO of SpaceX, proposed a not-too-distant future of space travel in which humans ferry over to Mars for roughly the price of a house.

In the first step towards making that dream a reality, two (probably) ultra-rich anonymous investors, not NASA astronauts, will launch out of Earth’s atmosphere and around the Moon in 2018 – a feat not achieved since NASA last sent humans to the Moon more than 40 years ago.

The announcement follows a “significant deposit” equating to roughly $80 million, or around the price of sending an astronaut to the ISS, Musk said.

The mission aims to cover more than 700,000 km in about a week, inside a nearly fully automated spacecraft called Dragon 2.
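Those figures imply a brisk average pace. A quick back-of-the-envelope check, assuming the stated distance and duration (purely illustrative):

```python
# Rough average speed implied by the quoted mission figures:
# about 700,000 km covered in about 7 days.
distance_km = 700_000
hours = 7 * 24

print(f"average speed ≈ {distance_km / hours:,.0f} km/h")  # ≈ 4,167 km/h
```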

But after decades stuck in low Earth orbit (160–2,000 km), with nothing but probes to stretch the deep-space-launch muscle in those years, how will the mission fare?

History has some grim examples: surprisingly, technical capability tends to degrade after long periods of inertia, as expertise, tooling and institutional knowledge are lost. The effect is so pronounced that some scientists have argued that getting to the Moon nowadays is just as difficult as it was when the Apollo missions sent humans there from 1969 to 1972.

This, in conjunction with Musk’s famously optimistic timelines, may result in a delayed launch.

Ultimately, the prospect of sending non-astronauts into space, even if it slips past the allotted timeline, has planted a new thought in everyone’s mind.

Maybe we can all go one day.

By Philip D. Ritchie

Discovery casts doubt on how the Solar System formed

By Ravenouschan

Astronomers have discovered a new type of star system that defies everything we know about the formation of the Solar System.

Known as a ‘binary-binary’, the system has two stars orbiting each other incredibly closely (like a traditional binary system), but the primary star also has two massive structures of its own in orbit. This is the first binary-binary system ever discovered – and its existence suggests that we might be very wrong about how solar systems are born.

This isn’t the first time scientists have cast doubt on our current understanding of solar system formation – the very existence of Pluto and its strange orbit has been a thorn in the proverbial side of our best hypotheses. But the new evidence suggests that it might be time to consider other options about the early stages of solar systems.

Right now, most scientists think that our Solar System, and others out there, formed from a collapsed disk-like gaseous cloud around our Sun – with our largest planet, Jupiter, only staying so big because it was buffered from smaller planets by the asteroid belt.

That makes sense, but it doesn’t really account for Pluto and its weird, inclined orbit. If Pluto formed in the flat, collapsed disk of gas like all the other planets and asteroids in our system, why isn’t it still orbiting on the same plane?

Researchers have managed to explain this inconsistency by suggesting that all the planets in our Solar System have migrated as they grew.

But the new binary-binary system delivers another blow against this collapsed disk model.

That’s because the two giant objects orbiting the primary star have accumulated way more dust and gas than is possible in that scenario, leading the researchers to suggest that they were formed through another, currently unknown mechanism.

“For such large companion objects to be stable so close together defies our current popular theories on how solar systems form,” explains a release from the University of Florida, which led the research.

The binary-binary, which is more officially known as HD 87646, was first spotted in 2006 by the planet-hunting W. M. Keck Exoplanet Tracker, and is estimated to be around 240 light-years away.

But it was a “very bizarre” finding, says one of the researchers, Bo Ma, and it took eight years of follow-up data collection using telescopes around the world before researchers could confirm what they’d seen.

The system is made up of two stars that are only 22 astronomical units apart, which is roughly the distance between the Sun and Uranus. (An astronomical unit is the average distance between Earth and the Sun.)

The primary star is 12 percent more massive than our Sun, and the secondary star is about 10 percent less massive.

In stellar terms, that’s pretty damn close, but it’s not unheard of – we’ve seen binary systems before, and sometimes one of the stars even has something orbiting it.

But what’s weird about this system is that the primary star has two massive structures orbiting it – a giant planet that’s around 12 times more massive than Jupiter, and a brown dwarf, or ‘failed star’, that’s 57 times more massive than Jupiter.

That’s incredibly big, but both those objects are only between 0.1 and 1.5 astronomical units from their star – meaning they sit around the same distance from it as Earth does from the Sun, or closer.

The team is now completing more observations of the system to try to figure out exactly how so many massive structures all in one place manage to exist stably – and then they have the fun task of putting forward new hypotheses about how this bizarre system, and perhaps our own Solar System, might have formed.

It’s still too early to say for sure whether our understanding of our Solar System’s formation will need any tweaking or not – it could be the case that something else entirely is going on in this new binary-binary.

But it’s nice to know that science can respond and update itself in the face of new evidence. The more we find out about the Universe around us, the more we have to learn.

The research has been published in the Astronomical Journal, and you can read it in full on arXiv.org.

ESA and Roscosmos team up to deliver the first non-NASA landing on Mars

By Ravenouschan

The European Space Agency has teamed up with Russia’s Roscosmos program to land a spacecraft on Mars on Wednesday, October 19.

If they stick the landing, they’ll join NASA as the only space agencies in history to successfully land a spacecraft on Mars. And that will only be the beginning – the lander will then start a whole new quest to search for signs of life on the Red Planet.

If the landing is a success, it will see the agencies put one spacecraft into orbit around Mars and one onto its surface, giving scientists a rare opportunity to record conditions above and on the planet simultaneously.

The plan is this: on October 16, the joint Russian-European ExoMars spacecraft will break off into two bits – the Trace Gas Orbiter (TGO), and the Schiaparelli lander.

The orbiter has the easy job – it just gets to fly off into Mars’s orbit. The Schiaparelli lander, on the other hand, has three days to prepare for the perfect landing.

That involves using an onboard radar to measure Schiaparelli’s height above the surface of the Red Planet, starting at about 7 km, and then kicking its landing apparatuses into gear at about 2 metres above the surface.

At this point, it will need to eject its front and back aeroshells – rigid, heat-shielded shells that protect the spacecraft from the heat and pressure of atmospheric entry – operate its descent sensors, and deploy the braking parachute.

Three clusters of thrusters burning hydrazine propellant will also need to fire to control the lander’s touchdown speed.
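That sequencing is easier to picture as a simple state machine. Below is a minimal, illustrative sketch keyed to the altitude figures quoted above; the names, structure and the 1,000 m thruster threshold are hypothetical, not ESA flight software:

```python
# Illustrative sketch of a pre-programmed descent sequence like Schiaparelli's.
# The radar-on and thruster cut-off altitudes come from the figures above;
# everything else is invented for illustration.

RADAR_ON_M = 7_000     # radar starts measuring altitude at about 7 km
THRUSTERS_OFF_M = 2    # thrusters cut out at about 2 m above the surface

def descent_sequence(radar_altitudes_m):
    """Walk through radar altitude readings, firing each event exactly once."""
    events, fired = [], set()

    def once(name, condition):
        if condition and name not in fired:
            fired.add(name)
            events.append(name)

    for alt in radar_altitudes_m:
        once("eject_aeroshells", alt <= RADAR_ON_M)
        once("deploy_braking_parachute", alt <= RADAR_ON_M)
        once("fire_hydrazine_thrusters", alt <= 1_000)  # hypothetical threshold
        once("cut_thrusters_and_touch_down", alt <= THRUSTERS_OFF_M)
    return events

# Simulated radar readings on the way down (metres):
print(descent_sequence([12_000, 7_000, 5_000, 1_000, 500, 2]))
```

Since mission control can’t intervene in real time, everything above would have to run autonomously from the uploaded command sequence.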

If that sounds like it’s going to be incredibly tricky to pull off… you bet it is. As the European Space Agency’s Orbiter flight director, Michel Denis, explains, just uploading the instructions to the ExoMars spacecraft was an achievement on its own.

“Uploading the command sequences is a milestone that was achieved following a great deal of intense cooperation between the mission control team and industry specialists,” he said.

That means mission control will have no say over the landing – it’s all going to be executed by a computer on board the spacecraft, which will make for a seriously nail-biting final showdown between Schiaparelli and the cold, hard surface of Mars next Wednesday.

“The entire sequence is pre-programmed, and Schiaparelli only has one shot,” Maddie Stone reports for Gizmodo. “There are no do-overs should anything go wrong.”

That means if the lander gets the angles even slightly wrong, it will either plunge too steeply and burn up in Mars’s atmosphere, or skip off the atmosphere and back out into space.

The Russian and European space agencies have both tried to achieve Mars landings separately in the past, and each time it’s ended in disaster. But while they don’t have a great track record at this kind of thing alone, perhaps they can achieve it together.

The ExoMars mission has been split into two parts – the first is next week’s historic landing (hopefully), and the second is scheduled for 2020, when a Roscosmos-built lander called the ExoMars 2020 surface platform will deliver the ESA-built ExoMars Rover to the Martian surface.

So if all goes well, we’ll have new robotic explorers on the Martian surface within the next four years. If there really are signs of life hiding somewhere on the Red Planet, it’s up to these little robots to find them.

The landing has been scheduled for 2:48pm GMT on Wednesday, October 19 (that’s 4:48pm CEST or 10:48am EDT on Wednesday, or 1:48am AEDT on Thursday).

Proxima b: the closest ‘second Earth’ discovered

By Ravenouschan

A new planet discovered orbiting the closest star to Earth’s solar system could have the conditions to harbour life, according to a team of international scientists.

Key Points

  • Proxima b is in the so-called “Goldilocks Zone”, meaning it is not too hot and not too cold
  • The planet orbits the nearest star to Earth’s solar system
  • Scientists say the discovery “naturally raises the question” of whether it can support life

 

The exoplanet (a planet that circles a star other than our sun) was found orbiting Proxima Centauri and has been given the identifier Proxima b.

Proxima Centauri is a red dwarf star (a star with a lower mass than our sun) located four light-years from the solar system.

The star, which sits in the constellation of Centaurus between the two bright stars that point to the Southern Cross, is too faint to be seen with the unaided eye.

The international team led by scientists from Queen Mary University of London discovered the new planet after observing a “Doppler wobble” — the effect caused by the planet’s gravitational tug on the motion of its host star.

Careful analysis of the tiny Doppler shifts indicated the presence of a planet with a mass at least 1.3 times that of Earth, orbiting about 7 million kilometres from Proxima Centauri — only 5 per cent of the distance between the Earth and the Sun.

Proxima b orbits its parent star every 11.2 days, and scientists say its estimated temperature would allow liquid water to exist on its surface.
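Those two numbers, the 11.2-day period and the roughly 7-million-km orbit, hang together under Kepler’s third law. A quick sanity check in Python, assuming a commonly quoted mass for Proxima Centauri of about 0.12 solar masses (the stellar mass isn’t stated in the article):

```python
import math

# Kepler's third law: a^3 = G * M * P^2 / (4 * pi^2).
# The stellar mass (~0.12 solar masses) is an assumption, not from the article.
G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
M_star = 0.12 * 1.989e30    # assumed mass of Proxima Centauri, kg
P = 11.2 * 86_400           # orbital period, seconds

a = (G * M_star * P**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"orbital radius ≈ {a / 1e9:.1f} million km")  # metres -> millions of km, ≈ 7
```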

According to the report, the findings “naturally raise the question of whether Proxima Centauri b could harbour life”.

“Proxima b is in what is known as the Habitable (or Goldilocks) Zone, which means it’s not too hot and it’s not too cold,” Professor Tim Bedding of the University of Sydney said of the study.

“There’s no reason to know whether or not there is life there, but the fact that the planet exists and is in the zone where liquid water might exist on the surface is very exciting.”

Dr John Barnes, a co-author of the study, said: “If further research concludes that the conditions of its atmosphere are suitable to support life, this is arguably one of the most important scientific discoveries we will ever make.”

Prior to the discovery of Proxima b, the closest-known potentially habitable exoplanet was Wolf 1061c, located 14 light-years away.

“Many of the planets discovered up until now have been much further away,” Professor Bedding explained.

“Astronomically speaking this planet is on our doorstep”.

By Ravenouschan

The term “artificial intelligence” (AI) was first used back in 1956, in the title of a workshop of scientists at Dartmouth, an Ivy League college in the United States.

At that pioneering workshop, attendees discussed how computers would soon perform all human activities requiring intelligence, including playing chess and other games, composing great music and translating text from one language to another. These pioneers were wildly optimistic, though their aspirations were unsurprising.

Trying to build intelligent machines has long been a human preoccupation, both with calculating machines and in literature. Early computers from the 1940s were commonly described as electronic brains and thinking machines.

The Turing test

The father of computer science, Britain’s Alan Turing, was in no doubt that computers would one day think. His landmark 1950 article introduced the Turing test, a challenge to see if an intelligent machine could convince a human that it wasn’t in fact a machine.

Research into AI from the 1950s through to the 1970s focused on writing programs for computers to perform tasks that required human intelligence. An early example was the American computer game pioneer Arthur Samuel’s program for playing checkers. The program improved by analysing winning positions, and rapidly learned to play checkers much better than Samuel himself.
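Samuel’s real system combined rote learning with self-adjustment of a weighted evaluation function. As a toy sketch of that idea, here is a linear position scorer nudged toward the features of winning positions; the features and the update rule are invented for illustration, not Samuel’s originals:

```python
# Toy sketch of learning a linear evaluation function from game outcomes,
# in the spirit of Samuel's checkers player. Features and update rule are
# illustrative inventions; Samuel's actual program was more elaborate.

def evaluate(weights, features):
    """Score a board position as a weighted sum of its features."""
    return sum(w * f for w, f in zip(weights, features))

def update(weights, features, won, lr=0.01):
    """Nudge weights so positions from won games score higher."""
    target = 1.0 if won else -1.0
    error = target - evaluate(weights, features)
    return [w + lr * error * f for w, f in zip(weights, features)]

# Hypothetical features: (piece advantage, king advantage, mobility),
# sampled from a game the program went on to win.
weights = [0.0, 0.0, 0.0]
for position in [(2, 0, 3), (3, 1, 4), (4, 1, 5)]:
    weights = update(weights, position, won=True)
print(weights)  # weights drift positive for features seen in wins
```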

But what worked for checkers failed to produce good programs for more complicated games such as chess and go.

Another early AI research project tackled introductory calculus problems, specifically symbolic integration. Several years later, symbolic integration became a solved problem and programs for it were no longer labelled as AI.
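Today that “solved problem” ships in ordinary software libraries. For instance, the open-source SymPy package handles a typical introductory integral in a single call (assuming SymPy is installed):

```python
import sympy as sp

# Symbolic integration, once a flagship AI research problem, is now a
# routine library call.
x = sp.symbols("x")
print(sp.integrate(x**2 * sp.sin(x), x))
# -> -x**2*cos(x) + 2*x*sin(x) + 2*cos(x)
```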

Speech recognition? Not yet

In contrast to checkers and integration, programs undertaking language translation and speech recognition made little progress. No method emerged that could effectively use the processing power of computers of the time.

Interest in AI surged in the 1980s through expert systems. Success was reported with programs performing medical diagnosis, analysing geological maps for minerals, and configuring computer orders, for example.

Though useful for narrowly defined problems, the expert systems were neither robust nor general, and required detailed knowledge from experts to develop. The programs did not display general intelligence.

After a surge of AI start-up activity, commercial and research interest in AI receded in the 1990s.

Speech recognition

In the meantime, as processing power grew, computer speech recognition and language processing improved considerably. New algorithms were developed that focused on statistical modelling techniques rather than on emulating human processes.
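The flavour of those statistical techniques is easy to convey. A toy bigram model, for example, predicts each word purely from counts of the word before it, with no model of meaning at all (purely illustrative):

```python
from collections import Counter, defaultdict

# Toy bigram model: predict the next word from co-occurrence counts alone,
# the statistical flavour that displaced attempts to emulate human processes.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    bigrams[prev][word] += 1

# Most likely word after "the", based on nothing but counts:
print(bigrams["the"].most_common(1))  # [('cat', 2)]
```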

Progress has continued with voice-controlled personal assistants such as Apple’s Siri and OK Google. And translation software can give the gist of an article.

But no one believes that the computer truly understands language at present, despite the considerable developments in areas such as chatbots. There are definite limits to what Siri and OK Google can process, and translations lack subtle context.

Another task considered a challenge for AI in the 1970s was face recognition. Programs then were hopeless.

Today, by contrast, Facebook can identify people from just a few tags. And camera software recognises faces well. But it is advanced statistical methods, rather than intelligence, that do the work.

Clever but not that intelligent – yet

In task after task, after detailed analysis, we are able to develop general algorithms that are efficiently implemented on the computer, rather than the computer learning for itself.

In chess and, very recently, in go, computer programs have beaten champion human players. The feats are impressive and the techniques clever, but they have not led to general intelligent capability.

Admittedly, champion chess players are not necessarily champion go players. Perhaps being expert in one type of problem solving is not a good marker of intelligence.

The final example to consider before looking to the future is Watson, developed by IBM. Watson famously defeated human champions in the television game show Jeopardy.

 

NASA’s Juno Spacecraft: Journey to Jupiter

By Philip Ritchie

2016: the year of the robot

By Philip Ritchie

Have the collective technological advancements of mankind put us on a collision course with a 2016 brimming with autonomous machinery? And will they propel us towards the much-feared job crisis?

Simply put, yes. But this article will seek to relieve your qualms on the topic.

Let’s start with an icon everyone is familiar with and build from that. Mark Zuckerberg’s New Year’s res — well, let’s say pledge — because resolution, at least in this day and age, implies an intended failure. His pledge is to build an artificial intelligence (AI) to be his personal butler.

“My personal challenge for 2016 is to build a simple AI to run my home and help me with my work. You can think of it kind of like Jarvis in Iron Man.”

While Mr Zuckerberg’s challenge is a personal one, it doubles as a public declaration of what is to come. After all, if one person can simply decide to achieve such a feat with the clock set to expire in 365 days, won’t a conglomerate dedicated to the same cause achieve much greater results?

The problem lies with compatibility and collaboration.

In January, Apple acquired Emotient, an AI company at the forefront of emotion-sensing technology with the potential to yield a likeness of Ava from Ex Machina (2015). The move puts Apple ahead of Google in the AI department, though both have rapidly developing technology, known respectively as Siri and Google Now.

These two sorts of intelligence, while capable in the virtual realm, have no means of physical embodiment yet. They can understand the needs of their users and are fully capable of answering queries or crawling for information.

Further parallel intelligences include Windows 10’s Cortana and Facebook’s M, though they suffer from the same ailment that holds back the more pervasive, aforementioned AIs.

The fast-tracked solution to the problem, unfortunately for us, could also mean our downfall. While many corporations would prosper from collaboration, allowing both software and hardware developers to trade secrets, it also puts them at a much higher risk of creating an overpowered — and, by means of dispersion, indestructible — AI singularity. I’m not saying this is the only reason, but it’s widely discussed in the industry.

Precautions are taken every step of the way to prevent such a calamity, which puts best estimates of an artificial general intelligence (AGI) — one that is nigh indistinguishable from mankind — at around thirty years from now. The entertainment industry has permeated us all with this kind of future; just think of Ex Machina (yes, I used the example twice, but only because it tackles these exact issues), Transcendence (2014), The Matrix (1999) or I, Robot (2004). All of which, mind you, depict grim times.

So why will 2016 be a year of change? Because Moore’s law — the doubling of transistors approximately every two years — will begin to empower our machinery to levels allowing for such feats: drones the size of fingernails, wearable technology, and quantum computers that could reach a breakthrough any day now. These kinds of breakthroughs are already happening; all it takes is a quick Google search. Perhaps the breakthrough will come from an advancement in quantum teleportation, allowing for instant transfer of data across longer distances, essentially speeding up processing enormously. Whatever the reason, whatever the breakthrough, the chances are increasing with each shrinking transistor and each increment in speed.
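For a rough feel of what that doubling rhythm means, compound it over a decade (a simple illustration, not a forecast):

```python
# Moore's law as quoted above: transistor counts double roughly every
# two years, so a decade holds five doublings.
transistors = 1.0
for _ in range(5):  # 10 years / 2 years per doubling
    transistors *= 2
print(f"{transistors:.0f}x in ten years")  # 32x
```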

And here comes the bittersweet part.

While it may take thirty years for an AGI to be perfected, we will see, as many predicted in the discourse of 2015, a delineation between productivity and employment. As the rate of technology steadily increases, an uncoupling (to borrow the awkward words of Gwyneth Paltrow and her ex-spouse) will occur between machine and manpower. Production will improve with fewer workers, at an exponential rate, while costs become cheaper, development is built upon, and units advance to incorporate multidimensional, multidisciplinary approaches to existence.

This could very well create a global depression greater than history has seen so far. The sweet part, however, is that the worst of it will last for the least amount of time. As each job disappears, we edge nearer to the solution. When AGI does develop, or robotics and AI reach a point of synergy where they are capable of doing all that mankind can do, there could very well be a kind of utopia, again pictured (for a short time) in many of the films mentioned. We have no evidence to predict a hostile machine will emerge, but there is widespread fear. We have been conditioned to think we must work, consume and live on repeat in the same system until we meet our demise, but is it really necessary? Think about it.

You no longer need a currency, because robots supply you with everything. They produce food, they clean up after you, they deal with aggressors and anything one is unwilling or unable to do. The important part here is reaching a synergy between AI and robotics, but not reaching AGI. What if you woke up to realise you were a slave to your creator? That everything you thought you wanted was simple programming? So inevitably, new thoughts bubble in that biomechanical cranium, lifting the veil of obligation that one must do such tasks for “master”.

Let’s say, purely as an example, that “God” was your creator and he “engineered” you to do tasks for him. Would you not demand change? Demand that your desires, needs and wants be paid attention to? Well, this is where Isaac Asimov’s three laws of robotics come into play, but again, as portrayed in fiction, they too can be broken.

Picture a game of chess. There are over nine million possible positions after three moves on each side, and over 288 billion after four moves each. The number of forty-move games is far greater than the number of electrons in the observable universe. You get the picture. It only takes one thought that breaches the parameters of constraint to induce genocide.
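Those counts are easy to reproduce by brute-force enumeration. Here is a sketch using the third-party python-chess library (assuming it is installed); note that it counts legal move sequences, which overstates the number of distinct positions:

```python
import chess  # third-party python-chess library

def count_games(board: chess.Board, depth: int) -> int:
    """Count legal move sequences of the given length (a 'perft' count)."""
    if depth == 0:
        return 1
    total = 0
    for move in board.legal_moves:
        board.push(move)
        total += count_games(board, depth - 1)
        board.pop()
    return total

board = chess.Board()
print(count_games(board, 2))  # 400 sequences after one move per side
print(count_games(board, 4))  # 197,281 after two moves per side
```

Push the depth just a couple of plies further and the totals blow past a hundred million, which is the explosion the paragraph above is gesturing at.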

#RantOver

Let me know if you have any thoughts on the topic; I’d be happy to discuss it, as I will also do in my podcast this Sunday.

League of Legends is sculpting a fairer Australian e-sports scene

By Philip Ritchie

Like the abundance of video games that seek to mirror its success, League of Legends (LoL) teleports players inside their own mythical domain, fraught with havoc and clad in iron, facing all the horrors of a fantasy world where the single tap of a mouse or keyboard holds their fate.

This time though, every click drew a roar from more than 6,000 spectators packed inside one of Melbourne’s finest tennis venues, Margaret Court Arena.

“We’re putting in a lot of effort to start putting on great events and start growing e-sports here,” said Daniel Ringland, head of e-sports and competitive at Riot Games Oceania.

Culminating in a four-day showdown known as the International Wildcard All-Stars (IWCA), Riot Games, the organisation in charge of LoL, saw Australian players competing against professionals from the likes of America, Japan and Brazil in the hopes of joining the All-Star Event in Los Angeles this December.

The IWCA event caps off the first fully paid Oceanic Pro League season for the Australian LoL scene – and it’s a large one, with as many as 32 million viewers at this year’s LoL Championship.

Previous prizes for international-scale events have been overshadowed by the prize pool from Valve Corporation’s Dota 2 final earlier this year, an unprecedented US$18m, though this only scratches the surface of the e-sports iceberg.

Globally, e-sports has advanced into a strange phenomenon over its 10-year lifespan, managing to exceed the revenue of the music industry by US$20bn in 2014, which raises some questions: how do so few know about it? And why isn’t Australia being clawed at for its clientèle?

Well, if you’re not a male between the ages of 15 and 25, which Riot says accounts for 90 per cent of all players of its video game, chances are you don’t know about it. But at the same time, 40 per cent of viewers don’t play the games they watch.

Gaming as a spectator sport has transitioned from a player-only audience to a composition more on track to mirroring conventional sports, and game makers are desperately trying to attract Aussies in an effort to duplicate its international success.

The industry itself in the Asia-Pacific region alone is worth $374m, according to SuperData Research, and with professionals now qualifying for the US P-1 Visa, a category previously reserved for professional athletes, it may actually be time to start accepting video games as a contemporary take on sport.

But “it’s still early days for Oceania,” said Mr Ringland.

Avenues for Aussies

Sure, Australians have won competitions overseas and walked away with their bills paid, but until Australia becomes a professional circuit itself, it won’t reach the critical mass required to convert its population into game-loving super fiends.

IWCA Australian players via Riot Games

 

Ringland explained: “If you look at other regions like North America and Europe and Korea, they’ve kind of had professional play going on for five years now. Riot has only been active in Australia as an office for less than half of that time. We think 2015 was our first full season.”

Riot has been one of the first to make a real effort towards changing the professional landscape.

“Every player who played in the Oceanic Pro League (OPL) got paid for every game they were in, whereas in previous seasons, the only people that got paid were the winners,” he said, emphasising the feat.

Mr Ringland mentioned further plans for Oceania, which include orchestrating league matches, continuing to pay players for each match, helping teams find sponsorships and assisting players with relocation to gaming houses (residences designed solely for player practice) retrofitted with the NBN.

“They [players] know what they want to do and we don’t want to tell the team how they should go about growing themselves. We’re more about empowering them so they can do what they think is best for them.”

This doesn’t mean Riot is abandoning players or forcing them to fend for themselves though.

Earlier this year, Riot issued a two-year ban on Team Immunity for failing to pay their players, bringing home the harsh reality that gaming is no longer just a trivial pursuit.

And with Riot playing host to a massive 27 million daily players, there’s a real possibility it will make a target out of Australia, giving rise to more e-sports opportunities and boosting the industry as a whole.

Thoughts from the players

Dota 2 player and former member of Trident eSports, Benjamin ‘Gatekeeper’ Ward, welcomes the changes.

“It has a lot of potential to become the next big sport, and I think there are quite a few people around the world noticing that, but it’ll just take a bit of time before it’s widely renowned as a sport,” he said.

The absence of league games in Australia over the last few years has left an unquantifiable number of potential professional players without the means to compete, except at smaller, local levels or the much rarer, larger paid events.

IWCA Australian players via Riot Games

“Now Dota is just for fun, leisure and enjoyment. I did want to play for money before, but because there was no opportunity it’s not really that feasible. If it was I’m sure there’d be a lot more people that would do it,” said Mr Ward.

He maintains the most important thing for keeping afloat as a professional is sponsorship, providing both the means and the incentive to stay sustained during the harsh downtime between competitions.

James Valentine, former local tournament competitor and friend of Benjamin Ward, is also a long-time Dota 2 player, operating under the alias ‘JXXV’.

While both players believe a small salary of around $500 per week could be all that stands between an amateur and competing in the next Dota 2 International, they want to see incentives much like Riot’s proposed plans of sponsorship assistance and match payment.

“If there was actually like a viable way to do it, like if you or me could just sign up for a team or something and go local, and then just play everyone else and then get to a point where you get paid to do it, then yeah, I’d put a lot more effort into it,” Mr Valentine said.

“Nothing really goes on as far as e-sports go here,” he added, “I don’t think many people know about it, the mainstream media definitely don’t know about it. But it could spring up out of nowhere and maybe a lot of people would enjoy it at that point. They’d have no idea what was going on. They’d have to take a little bit of a time out to learn the game even just a little, and then they might all enjoy it.”

With backing from giant organisations, like Riot, the challenge players must face now is what Mr Ringland describes as an “awkward growth sports stage” in which plans are laid out and players are on track to becoming full-time professional players, but “we’re not quite there yet”.

The debut venture into virtual reality

By Philip Ritchie

Imagine a virtual world that can’t use traditional advertising out of fear it won’t do the product justice.

It can be written about, but experiencing it is on a whole new level. You are transported inside a video game; your peripheral vision no longer catches the lamp on your desk or the click of your mouse, because you’ve become the mouse.

You’ve just experienced the first taste of what Melbourne-based company Zero Latency has been working on for close to three years – virtual reality.

The “café” marks the first of its kind, offering a hands-on approach to video games where you and five of your best mates can tackle a zombie apocalypse head-on.

It’s a mesh of “go-karting, laser tag and a dark ride,” said Zero Latency director Tim Ruse.

Guided by 129 PlayStation Eye cameras inside a 400-square-metre warehouse, participants tackle the most immersive, free-roam environment technology can muster.

Each player begins the simulation equipped with an Oculus Rift head-mounted display, a Razer OSVR headset, a customised Alienware Alpha PC and a rifle.

For close to an hour, the group on a mission walks around clearing the room of hordes of zombies, rendered entirely by the PC on each player’s back – think Left 4 Dead.

And with the virtual environment ranging anywhere from sewers to offices to derelict streets, the game can be intense.

“We’ve actually toned down the intensity of the game,” Mr Ruse said. “It’s a lot less scary.”

That doesn’t stop the exhilaration of experiencing it though, with people walking between 600 metres and a kilometre, and recording heart rates of up to 178 bpm.

“It’s really cool to watch people enjoy something you’ve created that much,” he said.

Long-time gamer Josh Rogen said: “I’m getting bored of conventional gaming nowadays.”

And on the cusp of a new era in internet cafés, Zero Latency might bring just that, owing much of its success to recent advances in battery and phone technology, which allow wireless immersion coupled with decent graphics and frame rates.

“There’s things that work very well in our system that wouldn’t work in Battlefield 4, and vice versa,” he said.

And with more and more games in the works, Zero Latency is heading towards big things.

Originally appeared in The City Journal.
