IELTS Reading Recent Actual Test 01
READING PASSAGE 1
You
should spend about 20 minutes on Questions 1-13 which are based on Reading
Passage 1
The Concept of Childhood in Western Countries
The
history of childhood has been a heated topic in social history since the highly
influential book ‘Centuries of Childhood’, written by French historian Philippe
Aries, appeared in 1960. He claimed that ‘childhood’ is a concept created by
modern society.
Whether
childhood is itself a recent invention has been one of the most intensely
debated issues in the history of childhood. Historian Philippe Aries asserted
that children were regarded as miniature adults, with all the intellect and
personality that this implies, in Western Europe during the Middle Ages (up to
about the end of the 15th century). After scrutinising medieval pictures and
diaries, he concluded that there was no distinction between children and adults,
for they shared similar leisure activities and work. However, this does not
mean children were neglected, forsaken or despised, he argued. The idea of
childhood corresponds to awareness about the peculiar nature of childhood,
which distinguishes the child from adult, even the young adult. Therefore, the
concept of childhood is not to be confused with affection for children.
Traditionally,
children played a functional role in contributing to the family income. Under
these circumstances, children were considered to be useful. Back
in the Middle Ages, children of 5 or 6 years old did necessary chores for their
parents. During the 16th century, children of 9 or 10 years old were often
encouraged or even forced to leave their family to work as servants for
wealthier families or apprentices for a trade.
In the
18th and 19th centuries, industrialisation created a new demand for child
labour; thus many children were forced to work for a long time in mines,
workshops and factories. The issue of whether long hours of labouring would
interfere with children’s growing bodies began to perplex social reformers.
Some of them started to realise the potential of systematic studies to monitor
how far these early deprivations might be influencing
children’s development.
The
concerns of reformers gradually had some impact upon the working conditions of
children. For example, in Britain, the Factory Act of 1833 signified the
emergence of legal protection of children from exploitation and was also
associated with the rise of schools for factory children. Due partly to factory
reform, the worst forms of child exploitation were eliminated gradually. The
influence of trade unions and economic changes also contributed to the
evolution by making some forms of child labour redundant during the 19th
century. Initiating children into work as ‘useful’ children was no longer a
priority, and childhood was deemed to be a time for play and education for all
children instead of a privileged minority. Childhood was increasingly
understood as a more extended phase of dependency, development and learning
with the delay of the age for starting full-time work. Even so, work continued
to play a significant, if less essential, role in children’s lives in the later
19th and 20th centuries. Finally, the ‘useful child’ has become a controversial
concept during the first decade of the 21st century, especially in the context
of global concern about large numbers of children engaged in child labour.
The
half-time schools established under the Factory Act of 1833 allowed children to
work and attend school. However, a significant proportion of children never
attended school in the 1840s, and even if they did, they dropped out by the age
of 10 or 11. By the end of the 19th century in Britain, the situation changed
dramatically, and schools became central to the concept of a ‘normal’
childhood.
Attending
school is no longer a privilege, and all children are
expected to spend a significant part of their day in a classroom. Once in
school, children’s lives could be separated from domestic life and the adult
world of work. In this way, school turns into an institution dedicated to
shaping the minds, behaviour and morals of the young. Moreover, education
came to dominate the management of children’s waking hours, through the hours
spent in the classroom, homework (and the growth of ‘after school’ activities),
and the importance attached to parental involvement.
Industrialisation,
urbanisation and mass schooling pose new challenges for those who are
responsible for protecting children’s welfare, as well as promoting their
learning. An increasing number of children are being treated as a group with
unique needs, and are organised into groups according to their age. For
instance, teachers need to know some information about what to expect of
children in their classrooms, what kinds of instruction are appropriate for
different age groups, and what is the best way to assess children’s progress.
Also, they want tools enabling them to sort and select children according to
their abilities and potential.
READING PASSAGE 2
You should spend about 20 minutes on Questions
14-27 which are based on Reading Passage 2
The Study of Chimpanzee Culture
A.
After studying the similarities between
chimpanzees and humans for years, researchers have recognised in the last
decade that these resemblances run much deeper than anyone first thought.
For instance, the nut cracking observed in the Tai Forest is not a simple
chimpanzee behaviour, but a separate adaptation found only in that particular
part of Africa, as well as a trait which is considered to be an expression of
chimpanzee culture by biologists. These researchers frequently invoke the word
‘culture’ to describe elementary animal behaviours, like the regional dialects
of different species of songbirds, but it turns out that the rich and varied
cultural traditions chimpanzees enjoy rank second in complexity only to
human traditions.
B.
During the past two years, the major
research groups which study chimpanzees have collaborated unprecedentedly and
documented some distinct cultural patterns, ranging from animals’ use of tools
to their forms of communication and social customs. This emerging picture of
chimpanzees affects how human beings ponder upon these amazing creatures. Also,
it alters our conception of human uniqueness and shows us the extraordinary
ability of our ancient ancestors to create cultures.
C.
Although we know that Homo sapiens and Pan
troglodytes have coexisted for hundreds of millennia and their genetic
similarities surpass 98 per cent, we still knew next to nothing about
chimpanzee behaviour in the wild until 40 years ago. All this began to change
in the 1960s when Toshisada Nishida of Kyoto University in Japan and renowned
British primatologist Jane Goodall launched their studies of wild chimpanzees
at two field sites in Tanzania. (Goodall’s research station at Gombe—the first
of its kind—is more famous, but Nishida’s site at Mahale is the second oldest
chimpanzee research site in the world.)
D.
During these early studies, as the
chimpanzees became more and more accustomed to close observation,
remarkable discoveries emerged. Researchers witnessed a variety of unexpected
behaviours, ranging from fashioning and using tools, hunting, meat eating, food
sharing to lethal fights between members of neighbouring communities.
E.
In 1973, 13 forms of tool use and 8 social
activities which appeared to differ between the Gombe chimpanzees and
chimpanzee species elsewhere were recorded by Goodall. She speculated that some
variations shared what she referred to as a ‘cultural origin’. But what exactly
did Goodall mean by ‘culture’? According to the Oxford Encyclopedic English
Dictionary, culture is defined as ‘the customs ... and achievements of a
particular time or people’. The diversity of human cultures extends from
technological variations to marriage rituals, from culinary habits to myths and
legends. Of course, animals do not have myths and legends, but they do share
the capacity to pass on behavioural traits from one generation to another, not
through their genes but via learning. From the biologists’ standpoint, this is the
fundamental criterion for a cultural trait—something can be learnt by observing
the established skills of others and then passed on to following generations.
F.
What are the implications for chimpanzees
themselves? We must place a high value upon the tragic loss of chimpanzees, who
are decimated just when finally we are coming to appreciate these astonishing
animals more completely. The population of chimpanzees has plummeted and
continued to fall due to illegal trapping, logging and, most recently, the
bushmeat trade within the past century. The latter is particularly alarming
because logging has driven roadways into forests, and these are now used to
ship wild animal meat—including chimpanzee meat—to consumers as far afield as
Europe. Such destruction threatens not only the animals themselves but also a
host of fascinatingly different ape cultures.
G.
However, the cultural richness of the ape
may contribute to its salvation. For example, conservation efforts have
already altered the attitudes of some local people. After several organisations
showed videotapes illustrating the cognitive prowess of chimpanzees, one
Zairian viewer was heard to exclaim, ‘Ah, this ape is so like me, I can no
longer eat him.’
H.
How did an international team of
chimpanzee experts perform the most comprehensive survey of the animals ever
attempted? Although scientists have been delving into chimpanzee culture for
several decades, sometimes their studies contained a fatal defect. So far, most
attempts to document cultural diversity among chimpanzees have solely relied
upon officially published accounts of the behaviours reported at each research
site. But this approach probably neglects a good deal of cultural variation for
three reasons.
I.
First, scientists normally don’t publish
an extensive list of all the activities they do not see at a particular
location. Yet this is the very information we need to know—which behaviours
were and were not observed at each site. Second, there are many reports
describing chimpanzee behaviours without expressing how common they are;
without this information, we can’t determine whether a particular action was a
transient phenomenon or a routine event that should be considered part of its
culture. Finally, researchers’ description of potentially significant
chimpanzee behaviours often lacks sufficient detail, which makes it difficult
for scientists at other sites to report the presence or absence of the
activities.
J.
To tackle these problems, my colleague and
I decided to take a new approach. We asked field researchers at each site to
list all the behaviours which they suspected were local traditions. With this
information, we assembled a comprehensive list of 65 candidates for cultural
behaviours.
K.
Then we distributed our list to team
leaders at each site. They consulted with their colleagues and classified each
behaviour regarding its occurrence or absence in the chimpanzee community. The
major brackets contained customary behaviour (occurs in most or all of the
able-bodied members of at least one age or sex class, such as all adult males),
habitual (less common than customary but occurs repeatedly in several
individuals), present (observed at the site but not habitual), absent (never
seen), and unknown.
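The five occurrence categories described above amount to a simple classification scheme. As an illustration only (the site names and behaviours below are hypothetical placeholders, not data from the actual survey), the logic for spotting likely cultural variants could be sketched as:

```python
# The five occurrence categories the team leaders used.
CATEGORIES = {"customary", "habitual", "present", "absent", "unknown"}

# Hypothetical classifications: site -> behaviour -> category.
observations = {
    "SiteA": {"nut_cracking": "customary", "leaf_clipping": "absent"},
    "SiteB": {"nut_cracking": "absent", "leaf_clipping": "habitual"},
}

def cultural_variants(obs):
    """Return behaviours that are established (customary or habitual) at one
    site but absent at another—candidates for cultural variation."""
    variants = set()
    behaviours = {b for site in obs.values() for b in site}
    for b in behaviours:
        cats = {site.get(b, "unknown") for site in obs.values()}
        if cats & {"customary", "habitual"} and "absent" in cats:
            variants.add(b)
    return variants

print(sorted(cultural_variants(observations)))  # → ['leaf_clipping', 'nut_cracking']
```

The key point of the scheme is that absence is recorded explicitly rather than inferred from silence, which is exactly the gap in the published literature that paragraph I complains about.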
READING PASSAGE 3
You should spend about 20 minutes on Questions
28-40 which are based on Reading Passage 3 below.
Texting the Television
A
Once
upon a time, if a television show with any self-respect wanted to target a
young audience, it needed to have an e-mail address. However, in Europe’s TV
shows, such addresses are gradually being replaced by telephone numbers so that
audiences can text the show from their mobile phones. Therefore, it comes as no
shock that according to Gartner’s research, texting has recently surpassed
Internet usage across Europe. Besides, among the many uses of text messaging,
one of the fastest-growing uses is interaction with television. The statistics
provided by Gartner show that 20% of French teenagers, 11% of teenagers in
Britain and 9% in Germany have responded to TV programmes by sending a text
message.
B
This
phenomenon can be largely attributed to the rapid growth of reality TV shows
such as ‘Big Brother’, where viewers get to decide the result through voting.
The majority of reality shows are now open to text-message voting, and in some
shows like the latest series of Norway’s ‘Big Brother’, most votes are
collected in this manner. But TV-texting isn’t just about voting. News shows
encourage viewers to comment by texting messages; game shows enable the
audience to be part of the competition; music shows take requests by
text message; and broadcasters set up on-screen chatrooms. TV audiences tend
to sit on the sofa with their mobile phones right by their sides, and ‘it’s a
very natural way to interact,’ says Adam Daum of Gartner.
C
Mobile
service providers charge appreciable rates for messages to certain numbers,
which is why TV-texting can bring in a lot of cash. Take the latest British
series of ‘Big Brother’ as an example. It brought about 5.4m text-message votes
and £1.35m ($2.1m) of profit. In Germany, MTV’s ‘Videoclash’ encourages the
audience to vote for one of two rival videos, and induces up to 40,000 texts
per hour, each costing €0.30 ($0.29), according to a
consultancy based in Amsterdam. The Belgian quiz show ‘1 Against 100’ had an
eight-round texting match on the side, which brought in 110,000 participants in
one month, each of whom paid €0.50 per question. In Spain, a
cryptic-crossword clue invites the audience to send their answers by text
at a cost of €1, entering them into a draw for a €300 prize. Normally,
6,000 viewers participate within one day.
At the moment, TV-related text messaging accounts for a considerable proportion of mobile service providers’ data revenues. In July, mmO2, a British operator, reported an unexpectedly satisfactory result, which could be attributed to the massive text waves generated by ‘Big Brother’. Providers usually keep 40%-50% of the revenue from each text, and the rest is divided among the broadcaster, the programme producer and the company which supplies the message-processing technology. So far, revenues generated from text messages have been an indispensable part of the business model for various shows. Understandably, there has been grumbling that the providers take too large a share. Endemol, the Netherlands-based production firm responsible for many reality TV shows, including ‘Big Brother’, has begun constructing its own database of mobile-phone users. It plans to set up a direct billing system with the users and bypass the providers.
D
Why
has the combination of television and text messaging turned out to be so
successful? One crucial aspect is the emergence of unique four-, five-
or six-digit numbers known as ‘short codes’. Every provider has control over
its own short codes, but not until recently did providers realise that it
would make much more sense to work together to offer short codes compatible
with all networks. The emergence of these universal short codes was a
game-changer, because short codes are much easier to remember on the screen,
according to Lars Becker of Flytxt, a mobile-marketing company.
E
Operators’
co-operation on enlarging the market is part of a larger trend, observes Katrina
Bond of Analysys, a consultancy. When faced with the dilemma between holding
on tight to their margins and permitting the emergence of a new medium, no
provider had ever chosen the latter. WAP, a technology for mobile-phone users to
read cut-down web pages on their screens, failed because of service providers’
reluctance to share revenue with content providers. Now that they have
learnt their lesson, they are altering the way they operate. Orange, a French
operator, has gone so far as to publish a rate card for sharing revenue
from text messages, a level of transparency that used to be unimaginable.
F
At a
recent conference, Han Weegink of CMG, a company that offers the television
market text-message infrastructure, pointed out that the television industry is
changing in a subtle yet fundamental way. Instead of the traditional one-way
presentation, more and more TV shows are now getting viewers’ reactions
involved.
Certainly,
engaging the audience more has always been the promise of interactive TV.
Interactive TV was originally designed to work with sophisticated set-top boxes,
which could be directly plugged into the TV. However, as Mr Daum points out,
that method was flawed in many ways. Developing and testing software for
multiple and incompatible types of set-top box could be costly, not to mention
that the 40% (or lower) market penetration is below that of mobile phones
(around 85%). What’s more, it’s quicker to develop and set up apps for mobile
phones. ‘You can approach the market quicker, and you don’t have to go through
as many greedy middlemen,’ Mr Daum says. Providers of set-top box technology
are now adding texting functions to the design of their products.
G
The
triumph of TV-related texting reminds everyone in the business of how easily a
fancy technology can all of a sudden be replaced by a less complicated,
lower-tech method. That being said, the old-fashioned approach to interactive
TV is not necessarily over; at least it proves that strong demands for
interactive services still exist. It appears that viewers would genuinely
like to do more than simply stare at the TV screen. After all, couch potatoes
would love some thumb exercises.
IELTS Reading Recent Actual Test 02
READING PASSAGE 1
You
should spend about 20 minutes on Questions 1-14 which are based on Reading
Passage 1 below.
Timekeeper: Invention of Marine Chronometer
A.
Up to the middle of the 18th century,
navigators were still unable to identify their position at sea exactly, so they
faced a great number of risks, such as shipwreck or running out of
supplies before arriving at their destination. Knowing one’s position on the
earth requires two simple but essential coordinates, one of which is
longitude.
B.
Longitude is a term that can be used
to measure the distance that one has covered from home to another place
around the world, without the limitation of a naturally occurring baseline like
the equator. To determine longitude, navigators had no choice but to measure
the angle with a naval sextant between the centre of the Moon and a specific
star—the lunar distance—along with the height of both heavenly bodies. Together
with a nautical almanac, Greenwich Mean Time (GMT) could be determined and
used to calculate longitude, because one hour of difference from GMT
corresponds to 15 degrees of longitude. Unfortunately, this approach relied
heavily on the weather conditions, which brought great inconvenience to the
crew members. Therefore, another method was proposed: the time difference
between the home time and the local time would serve as the measurement.
Theoretically, knowing one’s longitude was quite simple, even for people in the
middle of the sea with no land in sight. The key element for calculating the
distance travelled was to know, at any given moment, the accurate home time.
But the greatest problem was:
how could a sailor know the home time at sea?
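The arithmetic described above—each hour of difference between home time and local time corresponding to 15 degrees of longitude—can be illustrated with a minimal sketch (the function name and the sample times are illustrative, not from the passage):

```python
def longitude_from_time_difference(home_time_hours, local_time_hours):
    """Estimate longitude in degrees from the gap between home time (e.g. GMT,
    kept by an accurate clock) and local solar time. The Earth rotates 360
    degrees in 24 hours, i.e. 15 degrees per hour; a negative result means
    the ship is west of the home meridian."""
    return (local_time_hours - home_time_hours) * 15.0

# At local solar noon the home clock reads 15:00, so the ship is
# 3 hours behind home time: 45 degrees west of the home meridian.
print(longitude_from_time_difference(15.0, 12.0))  # → -45.0
```

This is why an accurate shipboard clock mattered so much: every minute of clock error translates directly into a quarter of a degree of longitude error.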
C.
The simple and again obvious answer is
that one takes an accurate clock with him, which he sets to the home time
before leaving. A comparison with the local time (easily identified by checking
the position of the Sun) would indicate the time difference between the home
time and the local time, and thus the distance from home could be obtained. The
truth was that nobody in the 18th century had ever managed to create a clock
that could endure the violent shaking of a ship and the fluctuating temperature
while still maintaining the accuracy of time for navigation.
D. After 1714, as an attempt to find a solution to the problem, the British government offered the tremendous sum of £20,000, which was to be managed by the magnificently named ‘Board of Longitude’. If a timekeeper was the answer (and there could be other proposed solutions, since the money wasn’t offered for timekeepers alone), then the required timekeeping error for achieving this goal needed to be within 2.8 seconds a day, which was considered impossible for any clock or watch at sea, even in the finest conditions.
E.
This award, worth about £2 million today,
inspired the self-taught Yorkshire carpenter John Harrison to attempt a design
for a practical marine clock. In the later stage of his early career, he worked
alongside his younger brother James. The first big project of theirs was to
build a turret clock for the stables at Brockelsby Park, which was
revolutionary because it required no lubrication. Harrison designed a marine
clock in 1730, and he travelled to London in search of financial aid. He
explained his ideas to Edmond Halley, the Astronomer Royal, who then introduced
him to George Graham, Britain’s first-class clockmaker. Graham provided him
with financial aid for his early-stage work on sea clocks. It took Harrison
five years to build Harrison Number One, or H1. Later, he sought improvement
through an alternative design and produced H4, which had the appearance of a
giant watch. Remarkable as it was, the Board of Longitude would not grant him
the prize until it was adequately satisfied.
F.
Harrison had a principal contestant for
the tempting prize at that time: an English mathematician called John Hadley,
who developed the sextant. The sextant is the tool that people use to measure
angles, such as the one between the Sun and the horizon, for a calculation of
the location of ships or planes. In addition, his invention was significant
because it could help determine longitude.
G. Most chronometer forerunners of that particular generation were English, but that doesn’t mean every achievement was made by them. One wonderful figure in this history is the Lancastrian Thomas Earnshaw, who created the ultimate form of chronometer escapement—the spring detent escapement—and made the final decision on the format and production system for the marine chronometer, turning it into a genuine modern commercial product, as well as a safe and pragmatic means of navigation at sea over the next century and a half.
READING PASSAGE 2
You should spend about 20 minutes on Questions
15-27 which are based on Reading Passage 2 below.
Ancient People in Sahara
On Oct.
13, 2000, Paul Sereno, a professor from the University of Chicago, guided a
team of palaeontologists as they climbed out of three broken Land Rovers,
collected their water bottles and walked across the toffee-coloured Tenere
Desert. The Tenere, one of the most barren areas on Earth, is located on the
southern flank of the Sahara. According to the turbaned Tuareg nomads, who have
ruled this infertile domain for a few centuries, this California-size ocean of
sand and rock is a ‘desert within a desert’. In the Tenere Desert, massive
dunes might stretch a hundred miles, as far as the eyes can reach. In addition,
120-degree heat waves and inexorable winds can take almost all the water from a
human body in less than a day.
Mike
Hettwer, a photographer in the team, was attracted by the amazing scenes and
walked to several dunes to take photos of the amazing landscape. When reaching
the first slope of the dune, he was shocked by the fact that the dunes were
scattered with many bones. He photographed these bones with his digital camera
and went to the Land Rover in a hurry. ‘I found some bones,’ Hettwer said to
other group members, ‘to my great surprise, they do not belong to the
dinosaurs. They are human bones.’
One day
in the spring of 2005, Paul Sereno got in touch with Elena Garcea, a
prestigious archaeologist at the University of Cassino in Italy, asking her to
return to the site with him. After spending 30 years researching the history of
the Nile in Sudan and of the mountains in the Libyan Desert, Garcea was well
acquainted with the life of the ancient peoples of the Sahara. But she did not
know Sereno before this exploration, and his claim of having found so many
skeletons in the Tenere desert seemed unreliable to some archaeologists, one of
whom regarded Sereno as merely a ‘moonlighting palaeontologist’. However,
Garcea was so intrigued by his account that she accepted his invitation
willingly.
In the
following three weeks, Sereno and Garcea (along with five excavators, five
Tuareg guides, and five soldiers from Niger’s army) sketched a detailed map of
the destined site, which was dubbed Gobero after the Tuareg name for the area,
a place the ancient Kiffian and Tuareg nomads used to roam. After that, they
excavated eight tombs and found twenty artefacts belonging to the two
civilisations mentioned above. From these artefacts, it is evident that
Kiffian fishermen caught not only small fish but also some huge ones: the
remains of Nile perch, a fierce fish weighing about 300 pounds, along with
those of alligators and hippos, were left in the vicinity of the dunes.
Sereno
went back with some essential bones and artefacts, and planned for the next trip
to the Sahara area. Meanwhile, he carefully extracted the teeth of the skeletons
and sent them to a research laboratory for radiocarbon dating. The results
indicated that while the smaller ‘sleeping’ bones dated back about 6,000
years (well within the Tenerian period), the bigger, tightly bound
artefacts were approximately 9,000 years old, right in the heyday of the
Kiffian era. The scientists could now distinguish one culture from the other.
In the
fall of 2006, in order to exhume another 80 burials, the team made another
trip to Gobero, taking more crew members and six extra scientists
specialising in different areas. Even at the site, Chris Stojanowski, a
bio-archaeologist at Arizona State University, found some clues by matching the
pieces. Judging from the bones, the Kiffian appear to have been a peaceful and
hardworking people. ‘The absence of injuries to heads or forearms indicates
that they did not fight much,’ he said. ‘And they had strong bodies.’ He
pointed at a long narrow femur and continued, ‘From this muscle attachment, we
can infer huge leg muscles, which means this individual lived a strenuous
lifestyle and ate a lot of protein. Both of these inferences coincide with the
lifestyle of a people living on fishing.’ As a striking contrast, he displayed
a femur of a Tenerian male, on which the ridge was scarcely visible. ‘This
individual had a less laborious lifestyle, which you might expect of a
herder.’
Stojanowski concluded that the Tenerian were herders, which was consistent with other scholars’ dominant view of the lifestyle in the Sahara area 6,000 years ago, when the dry climate favoured herding rather than hunting. But Sereno raised a confusing point: if the Tenerian were herders, where were the herds? Despite the thousands of animal bones excavated at Gobero, only three cow skeletons were found, and no goats or sheep at all. ‘It is common for herding people not to kill their cattle, particularly in a cemetery,’ Elena Garcea remarked. ‘Even modern pastoralists such as Niger’s Wodaabe are reluctant to slaughter the animals in their herds.’ Sereno suggested, ‘Perhaps the Tenerian at Gobero were a transitional group that still relied greatly on hunting and fishing and had not adopted herding completely.’
READING PASSAGE 3
You should spend about 20 minutes on Questions
28-40 which are based on Reading Passage 3
Quantitative Research in Education
Many education researchers used to work on the assumption that children experience different phases of development, and that they cannot execute the most advanced level of cognitive operation until they have reached the most advanced phase of cognitive development. For example, one researcher, Piaget, conducted a well-known experiment in which he asked children to compare the amount of liquid in containers of different shapes. Those containers had the same capacity, but even when it was demonstrated to the young children that the same amount of fluid could be poured between the containers, many of them still believed one held more than the other. Piaget concluded that the children were incapable of performing the logical task of figuring out that the two containers held the same amount even though they had different shapes, because their cognitive development had not reached the necessary phase. Critics of his work, such as Donaldson, have questioned this interpretation. They point out the possibility that the children were just unwilling to play the experimenter’s game, or that they did not quite understand the question asked by the experimenter. These criticisms surely do state the facts, but more importantly, they suggest that experiments are social situations in which interpersonal interactions take place. The implication here is that Piaget’s investigation and the attempts to replicate it are not solely about measuring children’s capabilities of logical thinking, but also about the degree to which they understand the directions given to them, their willingness to comply with these requirements, how well the experimenters did in communicating the requirements and in motivating the children, and so on.
The
same kinds of criticism have been levelled at psychological and educational
tests. For instance, Mehan argues that subjects might interpret test
questions in a way different from that meant by the experimenter. In a language
development test, researchers show children a picture of a medieval fortress,
complete with moat, drawbridge, parapets and three initial consonants in it: D,
C, and G. The children are required to circle the correct initial consonant for
‘castle’. The answer is C, but many kids choose D. When asked what the name of
the building was, the children responded ‘Disneyland’. They followed the line
of reasoning expected by the experimenter but arrived at the wrong substantive
answer. The score sheet recording the wrong answers does not reveal a child’s
lack of reasoning capacity; it only records that the children gave an answer
different from the one the tester expected.
Here,
questions are constantly raised about the validity of the measures on which the
findings of quantitative research are usually based. Some scholars such as
Donaldson consider these to be technical issues, which can be resolved through
more rigorous experimentation. In contrast, others like Mehan reckon that the
problems are not merely with particular experiments or tests, but might
legitimately jeopardise the validity of all research of this type.
Meanwhile,
there are also questions regarding the assumption in the logic of quantitative
educational research that causes can be identified through physical and/or
statistical manipulation of the variables. Critics argue that this does not take
into consideration the nature of human social life by assuming it to be made up
of static, mechanical causal relationships, while in reality, it includes
complicated procedures of interpretation and negotiation, which do not come
with determinate results. From this perspective, it is not clear that we can
understand the pattern and mechanism behind people’s behaviours simply in terms
of causal relationships, which are the focus of quantitative research. It
is implied that social life is much more contextually variable and complex.
Such
criticisms of quantitative educational research have also inspired more and
more educational researchers to adopt qualitative methodologies during the last
three or four decades. These researchers have steered away from measuring and
manipulating variables experimentally or statistically. There are many forms of
qualitative research, loosely indicated by terms like ‘ethnography’,
‘case study’, ‘participant observation’, ‘life history’, ‘unstructured interviewing’,
‘discourse analysis’ and so on. Generally speaking, though, it has
characteristics as follows:
Qualitative
research has an intensive focus on exploring the nature of certain phenomena
in the field of education, instead of setting out to test hypotheses about
them. It also tends to deal with ‘unstructured data’, which refers to the
kind of data that have not been coded during the collection process in terms of a
closed set of analytical categories. As a result, when engaging in observation,
qualitative researchers use audio or video devices to record what happens or
write detailed, open-ended field-notes, instead of coding behaviour in terms of
a pre-determined set of categories, which is what quantitative researchers
typically would do when conducting ‘systematic observation’. Similarly, in an
interview, interviewers will ask open-ended questions instead of ones that
require specific predefined answers of the kind typical of a postal
questionnaire. Indeed, qualitative interviews are often designed to resemble
casual conversations.
The primary forms of data analysis include verbal description and explanation, involving explicit interpretation of both the meanings and functions of human behaviours. At most, quantification and statistical analysis play only a subordinate role.

The sociology of education and evaluation studies were the two areas of educational research where criticism of quantitative research and the development of qualitative methodologies initially emerged most intensely. A series of studies conducted by Lacey, Hargreaves and Lambert in a boys’ grammar school, a boys’ secondary modern school, and a girls’ grammar school in Britain in the 1960s marked the beginning of the trend towards qualitative research in the sociology of education. The researchers employed an ethnographic or participant observation approach, although they also collected some quantitative data, for instance on friendship patterns among the students. They observed lessons, interviewed both teachers and students, and made extensive use of school records. They studied the schools for a considerable period, spending many months gathering data and tracking changes over the years.
IELTS Reading Recent Actual Test 03
READING PASSAGE 1
You should spend about 20 minutes on Questions 1-13 which
are based on Reading Passage 1 below.
The Innovation of Grocery Stores
A
At the
very beginning of the 20th century, American grocery stores offered
comprehensive service: customers would ask the people behind the
counters (called clerks) for the items they wanted, and then the clerks would
wrap the items up. To save time, customers would send the lists of what they
intended to buy to the stores in advance, either with delivery boys or in
person, and then go to pay for the goods later. Generally
speaking, these grocery stores sold only one brand of each item. Such early
chain stores as the A&P stores, although offering full service, made
purchasing very time-consuming and inefficient.
B
Born in
Virginia, Clarence Saunders left school at the age of 14 in 1895 to work first
as a clerk in a grocery store. While working in the store, he found that
it was very inefficient for people to buy things there. Without the assistance
of computers at that time, shopping was performed in a quite backward way.
Having noticed that this inconvenient shopping mode could lead to tremendous
consumption of time and money, Saunders, with great enthusiasm and innovation,
proposed an unprecedented solution—let the consumers do self-service in the
process of shopping—which might bring a thorough revolution to the whole
industry.
C
In 1902, Saunders moved to Memphis to put his vision into practice, that is, to establish a grocery wholesale cooperative. In his newly designed grocery store, he divided the store into three different areas. A ‘front lobby’ served as an entrance, an exit, and the location of the checkouts at the front. A ‘sales department’ was deliberately designed to allow customers to wander around the aisles and select the groceries they needed. In this way, the clerks would not have to do unnecessary work but could instead arrange more carefully designed aisles and shelves to display the goods and enable the customers to browse through all the items. From the gallery above the sales department, supervisors could monitor the customers without disturbing them. The ‘stockroom’, where large fridges were placed to keep products fresh, was another section of his grocery store, and the only one reserved for staff. Also, this new shopping design and layout could accommodate more customers shopping simultaneously, and even gave rise to some previously unimaginable phenomena: impulse buying and, later, the supermarket.
D
On
September 6, 1916, Saunders launched the self-service revolution in the USA by
opening the first Piggly Wiggly store, featuring a turnstile at the entrance,
at 79 Jefferson Street in Memphis, Tennessee. Quite distinct from those in
other grocery stores, customers in Piggly Wiggly chose the goods on the shelves
and paid for the items all by themselves. Inside the Piggly Wiggly, shoppers were
not at the mercy of staff. They were free to roam the store, check out the
products and pick up what they needed with their own hands. There, the items were
clearly priced, and no one forced customers to buy things they did not
need. As a matter of fact, the biggest benefit that the Piggly Wiggly brought
to customers was the money it saved them. Self-service promised real
improvement. ‘It is good for both the consumer and retailer because it cuts
costs,’ noted George T. Haley, a professor at the University of New Haven and
director of the Centre for International Industry Competitiveness, ‘if you look
at the way in which grocery stores (previous to Piggly Wiggly and Alpha Beta)
were operated, what you can find is that there are a great number of workers
involved, and labour is a major expense.’ Fortunately, the chain stores such as
Piggly Wiggly cut the fat.
E
Piggly
Wiggly and self-service stores of its kind soared at that time. In the first
year, Saunders opened nine branches in Memphis. Meanwhile, Saunders immediately
applied for a patent on the self-service concept and began franchising Piggly
Wiggly stores. Thanks to self-service and franchising, the
number of Piggly Wiggly stores had increased to nearly 1,300 by 1923. Piggly Wiggly
sold $100 million (worth $1.3 billion today) in groceries, which made it the
third-biggest grocery retailer in the nation. After that, the chain was
listed on the New York Stock Exchange, with its stock doubling from late 1922
to March 1923. Saunders contributed significantly to
the design and layout of grocery stores. In order to keep the flow of customers
smooth, Saunders even invented the turnstile to replace the conventional
entrance.
F
Clarence Saunders died in 1953, leaving abundant legacies, chief among them Piggly Wiggly, whose pattern spread extensively and endures to this day.
READING PASSAGE 2
You should spend about 20 minutes on Questions
14-26 which are based on Reading Passage 2 below.
Bestcom—Considerate Computing
‘Your
battery is now fully charged,’ announced the laptop to its owner Donald A.
Norman in a synthetic voice, with great enthusiasm and maybe even a hint of
pride. For the record, humans are not at all unfamiliar with distractions and
multitasking. ‘We are used to a complex life that gets constantly interrupted
by computers’ attention-seeking requests, as much as we are familiar with
procreation,’ laughs Ted Selker of the Massachusetts Institute of Technology
(MIT) Media Lab.
Humanity is now connected to
approximately three billion networked telephones, computers, traffic lights and
even fridges and picture frames, because these things can facilitate our daily
lives. That is why we do not typically turn off the phones, shut down the
e-mail system, or close the office door even when we have a meeting coming or a
stretch of concentrated work. We merely endure the consequences.
Countless research reports have confirmed that if people are
unexpectedly interrupted, they may suffer a drop in work efficiency, and they
are more likely to make mistakes. According to Robert G. Picard from the
University of Missouri, frustration appears to build up cumulatively,
and that stress response makes it difficult to focus again. It is not solely
about productivity and the pace of life. For some professionals like pilots,
drivers, soldiers and doctors, loss of focus can be downright disastrous. ‘If
we could find a way to make our computers and phones realise the limits of
human attention and memory, they may come off as more thoughtful and
courteous,’ says Eric Horvitz of Microsoft Research. Horvitz, Selker and Picard
are just a few of a small but prospering group of researchers who are
attempting to make computers, phones, cars and other devices function more
like considerate colleagues instead of egocentric oafs.
To do this, the machines need new skills of three kinds: sensing, reasoning and communicating. First, a system must sense or infer where its owner is and what he or she is doing. Next, it must weigh the value of the messages it wants to convey against the cost of the disruption. Then it has to choose the best mode and time to interject. Each of these pushes the limits of computer science and raises issues of privacy, complexity or reliability. Nevertheless, ‘attentive’ computing systems have started to make an appearance in the latest Volvos, and IBM has designed and developed communications software called WebSphere that comes with an underlying sense of busyness. Microsoft has been conducting extensive in-house tests of a far more sophisticated system since 2003. In a couple of years, companies might manage to provide each office employee with a software version of the personal receptionist that is only available to corner-suite executives today.
However,
the truth is that most people are not as busy as they claim to be, which
explains why we can often stand interruptions from our inconsiderate electronic
paraphernalia. To find out the extent to which such disruption may claim
people’s daily time, an IBM Research team led by Jennifer Lai from Carnegie
Mellon University studied ten managers, researchers and interns at the
workplace. They videotaped the subjects and, at regular intervals,
asked them to rate their ‘interruptibility’.
The time a worker spent in a leave-me-alone state varied from individual to
individual and from day to day, ranging from 10 to 51 per cent. Generally,
the employees wished to work without interruption for roughly one-third of the time.
Similarly, by studying Microsoft workers, Horvitz also discovered
that they ordinarily spend over 65 per cent of their day in a low-attention
mode.
Obviously,
today’s phones and computers are probably correct about two-thirds of the time
in assuming that their users are always available to answer a call, check an
email, or click the ‘OK’ button on an alert box. But for the considerate
systems to be functional and useful, their accuracy has to be above 65 per cent
in sensing when their users are about to reach their cognitive limit.
Inspired
by Horvitz’s work, Microsoft’s prototype Bestcom-Enhanced Telephony (Bestcom-ET)
digs a bit deeper into each user’s computer to find clues about what they
are dealing with. As I said earlier, Microsoft launched an internal beta test
of the system in mid-2003. Horvitz points out that by the end of last October,
nearly 3,800 people had been relying on the system to field their incoming
calls.
Horvitz is, in fact, a tester himself, and as we have our conversation in his office, Bestcom silently takes care of all his calls. First, it checks whether the caller is in his address book, the company directory, or his ‘recent call’ list. By triangulating all these resources at once, it attempts to figure out what the caller’s relationship to Horvitz is. The calls that get through are from family, supervisors and people he called earlier that day. Other callers see a message on their screens saying that he cannot answer now because he is in a meeting, and will not be available until 3pm. The system scans both Horvitz’s and the caller’s calendars to check whether it can schedule a callback at a time that works for both of them. Some callers take that option, while others simply leave a voicemail. The same happens with e-mails. When Horvitz is not in his office, Bestcom automatically offers to transfer selected callers to his cellphone, unless his calendar implies that he is in a meeting.
READING PASSAGE 3
You should spend about 20 minutes on Questions
27-40 which are based on Reading Passage 3 below.
The Olympic Torch
From
776 B.C., when the Greeks held their first-ever Olympic Games, the Games
were hosted every four years at the city of Olympia. Back then, a long journey for
the Olympic torch was made before the opening ceremony of each Olympic Games.
The Greek people would light a cauldron of flames on the altar, a ritual
devoted to Hera, the Greek goddess of birth and marriage.
The
reintroduction of the flame to the Olympics occurred at the Amsterdam 1928 Games,
for which a cauldron was lit, though without a torch relay. The 1936 Berlin Summer
Games held the first Olympic torch relay, which did not reach the Winter
Olympics until 1952. However, in that year the torch was lit not in Olympia,
Greece, but in Norway, which was considered the birthplace of skiing. Not until
the Innsbruck 1964 Winter Olympics in Austria was the Olympic flame once again
lit at Olympia.
The
torch begins as an abstract concept in the mind of a designer or group of designers. A
couple of design groups hand in their drafts to the Olympic Committee in the
hope of getting the chance to create the torch. The group that wins
the competition then comes up with a design for a torch that has both aesthetic
and practical value. After the torch is completed, it has to withstand
all sorts of severe weather conditions. The appearance of the modern
Olympic torch is attributed to a Disney artist, John Hench, who designed the
torch for the 1960 Winter Olympics in Squaw Valley, California. His design laid
a solid foundation for all the torches that followed.
The
long trip to the Olympic site is not completed by one single torch, but by
thousands of them, so the torch has to be replicated many times. Approximately
10,000 to 15,000 torches are built to supply the thousands of runners who carry
the torches through every section of the Olympic relay. Every runner can
choose to buy his or her torch as a treasured souvenir when he or she
finishes his or her part of the relay.
The
first torch in the modern Olympics (the 1936 Berlin Games) was made from a
slender steel rod with a circular platform at the top and a circular hole in
the middle to jet flames.
The
name of the runner was also inscribed on the platform as a token of thanks. In
the earlier days, torches used everything from gunpowder to olive oil as fuels.
Some torches adopted a combination of hexamine and naphthalene with a flammable
fluid. However, these materials weren’t exactly the ideal fuel sources, and
they could be quite hazardous sometimes. In the 1956 Olympics, the torch in the
final relay was ignited by magnesium and aluminium, but some flaming pieces
fell off and seared the runner’s arms.
To
improve safety, liquid fuels made their first appearance at the 1972
Munich Games. Since then, torches have used fuels which are pressurised
into liquid form. When the fuels are burnt, they turn into gas to
produce a flame. Liquid fuel is safer for the runner and can be stored in
a light container. The torch at the 1996 Atlanta Summer Olympics is equipped
with an aluminium base that accommodates a tiny fuel tank. As the fuel ascends
through the modified handle, it is squeezed through a brass valve that has
thousands of little openings. As the fuel passes through the tiny openings, it
accumulates pressure. Once it makes its way through the openings, the pressure
decreases and the liquid becomes gas so it can burn.
The
torch in 1996 was fuelled by propylene, a type of substance that could give out
a bright flame. However, since propylene was loaded with carbon, it would
produce plenty of smoke which was detrimental to the environment. In 2000, the
designers of the Sydney Olympic torch proposed a lighter and cheaper design,
which was harmless to the environment. For the fuel, they decided to go with a
combination of 35 per cent propane (a gas that is used for cooking and heating)
and 65 per cent butane (a gas that is obtained from petroleum), thus creating a
powerful flame without generating much smoke.
Both
the 1996 and 2000 torches adopted a double-flame burning system, enabling the
flames to stay lit even in severe weather conditions. The exterior flame burns
at a slower rate and at a lower temperature. It can be perceived easily, with
its big orange flame, but it is unstable. On the other hand, the interior flame
burns faster and hotter, generating a small blue flame with great stability,
because its internal position protects it from the wind. Accordingly, the
interior flame serves as a pilot light, which can relight the external
flame if it should go out.
As for the torch of the 2002 Olympics in Salt Lake City, the top section, in which the flame burned, was made of glass, echoing that Olympics’ theme of ‘Light the Fire Within’. This torch was of great significance for subsequent torch designs.
IELTS Reading Recent Actual Test 04
READING PASSAGE 1
You should spend about 20 minutes on Questions 1-14 which
are based on Reading Passage 1 below.
History of Refrigeration
Refrigeration
is a process of removing heat, which means cooling an area or a substance below
the environmental temperature. Mechanical refrigeration makes use of the
evaporation of a liquid refrigerant, which goes through a cycle so that it can
be reused. The main cycles include vapour-compression, absorption, steam-jet or
steam-ejector, and airing. The term ‘refrigerator’ was first introduced by a
Maryland farmer, Thomas Moore, in 1803, but it was in the 20th century that the
appliance we know today first appeared.
People
used to find various ways to preserve their food before the advent of
mechanical refrigeration systems. Some preferred using cooling systems of ice
or snow, which meant that diets would have consisted of very little fresh food
or fruits and vegetables, but mostly of bread, cheese and salted meats. Milk
and cheeses were very difficult to keep fresh, so such foods were
usually stored in a cellar or window box. In spite of those measures, they
could not escape rapid spoilage. Later on, people discovered that adding such
chemicals as sodium nitrate or potassium nitrate to water could lower its
temperature. In 1550, when this technique was first recorded, people used it to
cool wine, and the term ‘to refrigerate’ was first recorded at the same time.
Cooling drinks grew very popular
in Europe by 1600, particularly in Spain, France, and Italy. Instead of cooling
water at night, people used a new technique: rotating long-necked bottles of
water which held dissolved saltpeter. The solution was intended to create very
low temperatures and even to make ice. By the end of the 17th century, iced
drinks, including frozen juices and liquors, had become extremely fashionable in
France.
People’s
demand for ice soon became strong. Consumers’ soaring requirement for fresh
food, especially for green vegetables, resulted in a reform of people’s dietary
habits between 1830 and the American Civil War, accelerated by a drastic
expansion of the urban areas and the rapid improvement in the economy of the
populace. With the growth of cities and towns, the distance between the consumer
and the source of food increased. In 1799, as a commercial product, ice was
first transported out of Canal Street in New York City to Charleston, South
Carolina. Unfortunately, this transportation was not successful because when the
ship reached the destination, little ice was left. Frederick Tudor and Nathaniel Wyeth,
two New England businessmen, grasped the great potential of the ice
business and managed to improve the storage of ice during
shipment. Tudor, the acknowledged ‘Ice King’ of that time, concentrated his
efforts on bringing ice to tropical areas. In order to achieve his goal
and guarantee that the ice arrived at its destination safely, he experimented
with many insulating materials and successfully constructed ice
containers, which drastically reduced ice loss from 66 per cent to less than
8 per cent. Wyeth invented an economical and speedy method of cutting ice
into uniform blocks, which had a tremendously positive influence on the ice
industry. He also improved the processing techniques for storing, transporting
and distributing ice with less waste.
When
people realised that ice transported from afar was not as clean as
previously thought and gradually caused many health problems, it became more
urgent to seek clean natural sources of ice. To make matters worse, by the
1890s water pollution and sewage dumping made clean ice even harder to obtain.
The adverse effect appeared first in the brewing industry, and then seriously
spread to such sectors as the meat packing and dairy industries. As a result,
clean, mechanical refrigeration was in considerable demand.
Many
inventors with creative ideas took part in the invention of
refrigeration, and each version built on previous discoveries. Dr
William Cullen began to study the evaporation of liquids under vacuum
conditions in 1720. He went on to invent the first man-made refrigerator at the
University of Glasgow in 1748, employing ethyl ether boiling into a
partial vacuum. American inventor Oliver Evans designed the first refrigerator
to use vapour rather than liquid in 1805. Although his conception was
not put into practice in the end, the mechanism was adopted by an American
physician, John Gorrie, who made a cooling machine similar to Evans’ in 1842
with the purpose of reducing the temperature of patients with yellow fever
in a Florida hospital. In 1851, Gorrie obtained the first patent for
mechanical refrigeration in the USA. In 1820, Michael Faraday, a Londoner,
first liquefied ammonia to cause cooling. In 1859, Ferdinand Carre from France
invented the first version of the ammonia water cooling machine. In 1873, Carl
von Linde designed the first practical and portable compressor refrigerator in
Munich, and in 1876 he abandoned the methyl ether system and began using the
ammonia cycle. Linde later created a new method (the ‘Linde technique’) for
liquefying large amounts of air in 1894. Nearly a decade later, this mechanical
refrigerating method was adopted by the meat packing industry in
Chicago.
Since
1840, cars with refrigerating systems had been used to deliver and
distribute milk and butter. By 1860, most seafood and dairy products were
transported with cold-chain logistics. In 1867, refrigerated railroad cars were
patented by J.B. Sutherland of Detroit, Michigan, who invented insulated cars
by installing ice bunkers at the ends of the cars: air came in from the top,
passed through the bunkers, circulated through the cars by gravity, and was
controlled by different quantities of hanging flaps, which produced different
air temperatures. Depending on the cargo (such as meat, fruits, etc.) transported
by the cars, different car designs came into existence. In 1867, the first
refrigerated car to carry fresh fruit was manufactured by Parker Earle of
Illinois, who shipped strawberries on the Illinois Central Railroad. Each chest
was freighted with 100 pounds of ice and 200 quarts of strawberries. In
1949, the trucking industry began to be equipped with refrigeration systems
using a roof-mounted cooling device, invented by Fred Jones.
From the late 1800s to 1929, refrigerators employed toxic gases – methyl chloride, ammonia, and sulfur dioxide – as refrigerants. But in the 1920s, a great number of lethal accidents took place due to the leakage of methyl chloride out of refrigerators. Therefore, some American companies started to seek safer methods of refrigeration. Frigidaire discovered a new class of synthetic refrigerants called halocarbons or CFCs (chlorofluorocarbons) in 1928. This research led to the discovery of chlorofluorocarbons (Freon), which quickly became the prevailing material in compressor refrigerators. Freon was safer for the people in the vicinity, but in 1973 it was discovered to have detrimental effects on the ozone layer. After that, new improvements were made, and hydrofluorocarbons, with no known harmful effects, were used in the cooling system. Nowadays, chlorofluorocarbons (CFCs) are no longer used; they have been declared illegal in several places, making refrigeration far safer than before.
READING PASSAGE 2
You should spend about 20 minutes on Questions
15-27 which are based on Reading Passage 2 below.
The Evolutionary Mystery: Crocodile Survives
A.
Even though crocodiles have existed for
200 million years, they’re anything but primitive. Crocodilians, the
crocodiles’ ancestors, came to adapt to an aquatic lifestyle. When most of the other
contemporary reptiles went extinct, crocodiles were able to survive because
their bodies changed and they adapted better to the climate. They witnessed the
rise and fall of the dinosaurs, which once ruled the planet, and even the 65
million years of alleged mammalian dominance didn’t wipe them out. Nowadays,
crocodiles and alligators are not that different from their prehistoric
ancestors, which proves that they were (and still are) incredibly adaptive.
B.
The first crocodile-like ancestors came
into existence approximately 230 million years ago, and they had many of the
features which make crocodiles natural and perfect stealth hunters: streamlined
body, long tail, protective armour and long jaws. They are born with four short,
webbed legs, but this does not mean that their capacity to move on the ground
should ever be underestimated. When they move, they are so fast that you won’t
get the chance to make the mistake of getting too close a second time,
especially when they’re hunting.
C. Like other reptiles, crocodiles are poikilothermal animals (commonly known as coldblooded, whose body temperature changes with that of the surroundings) and consequently, require exposure to sunlight regularly to raise body temperature. When it is too hot, they would rather stay in water or shade. Compared with mammals and birds, crocodiles have a slower metabolism, which makes them less vulnerable to food shortage. In the most extreme case, a crocodile can slow its metabolism down even further, to the point that it would survive without food for a whole year, enabling them to outlive mammals in relatively volatile environments.
D.
Crocodiles have a highly efficient way of
catching prey. The prey rarely realises there might be a crocodile under the
water, because the crocodile makes its move without any noise or great vibration
when spotting its prey. It keeps only its eyes above the water level. As soon
as it gets close enough to the victim, it jerks out of the water with its
wide-open jaws. Crocodiles are successful because they are capable of switching
feeding methods. A crocodile chases fish and snatches birds at the water surface,
hides in the waterside bushes in anticipation of a gazelle, and when the chance
to ambush presents itself, dashes forward, knocks the animal out
with its powerful tail and then drags the prey into the water to drown.
E.
In many crocodilian habitats, the hot
season brings drought that dries up their hunting grounds, making it harder
for them to regulate body temperatures. This, however, is a challenge the reptiles are well equipped to survive.
For instance, many crocodiles can protect themselves by digging holes and covering
themselves in mud, waiting for months without consuming any food or water until
the rains finally return. They transform into a quiescent state called
aestivation.
F.
The majority of crocodilians are considered
to go into aestivation during the dry season. In a six-year study by Kennett
and Christian, the King Crocodiles, a species of Australian freshwater
crocodile, spent nearly four months a year underground without access to water
resources. Doubly labelled water was applied to detect field metabolic rates
and water flux, and during some years, plasma fluid samples were taken once a
month to keep track of the effects of aestivation on the accumulation of
nitrogenous wastes and electrolyte concentrations.
G. The study discovered that the crocodiles’ metabolic engines function slowly, creating waste and exhausting water and fat reserves. Waste is stored in the urine, becoming more and more concentrated. Nevertheless, the concentration of waste products in blood doesn’t fluctuate much, allowing the crocodiles to carry on their normal functions. Besides, even though the crocodiles lost water reserves and body weight when underground, the losses were proportional; upon emerging, the aestivating animals had no dehydration and displayed no other harmful effects such as a slowed-down growth rate. The two researchers reckon that this capacity of crocodiles to get themselves through the harsh times and the long starvation periods is sure to be the answer to the crocodilian line’s survival throughout history.
READING PASSAGE 3
You should spend about 20 minutes on Questions
28-40 which are based on Reading Passage 3 below.
Elephant Communication
O’Connell-Rodwell,
a postdoctoral fellow at Stanford University, has travelled to
Namibia’s first-ever wildlife reserve to explore the mystical and complicated
realm of elephant communication. She, along with her colleagues, is part of a
scientific revolution that started almost 20 years ago. This revolution has
made a stunning revelation: elephants are capable of communicating with each
other over long distances with low-frequency sounds, also known as infrasounds,
which are too deep for humans to hear.
As
might be expected, African elephants’ ability to detect seismic sound may have
something to do with their ears. The hammer bone in an elephant’s inner ear is
proportionally huge for a mammal, but rather normal for animals that use
vibrational signals. It may therefore be a sign that elephants can use
seismic sounds to communicate.
Other
aspects of elephant anatomy also support that ability. First, their massive
bodies, which enable them to give out low-frequency sounds almost as powerful
as the sound a jet makes during takeoff, serve as ideal frames for receiving
ground vibrations and transmitting them to the inner ear. Second, the
elephant’s toe bones are set on a fatty pad, which might be of help when
focusing vibrations from the ground into the bone. Finally, the elephant has an
enormous brain that sits in the cranial cavity behind the eyes in line with the
auditory canal. The front of the skull is riddled with sinus cavities, which
might function as resonating chambers for ground vibrations.
It
remains unclear how the elephants detect such vibrations, but O’Connell-Rodwell
raises the point that the pachyderms are ‘listening’ with their
trunks and feet instead of their ears. The elephant trunk may just be the most
versatile appendage in nature. Its utilization encompasses drinking, bathing,
smelling, feeding and scratching. Both trunk and feet contain two types of
nerve endings that are sensitive to pressure – one detects infrasonic
vibration, and another responds to vibrations higher in frequencies. As O’
Connell-Rodwell sees, this research has a boundless and unpredictable future.
‘Our work is really interfaced of geophysics, neurophysiology and ecology,’ she
says. ‘We’re raising questions that have never even been considered before.’
Scientists have long known that seismic communication is widely observed among small animals, such as spiders, scorpions, insects and quite a few vertebrate species like white-lipped frogs, blind mole rats, kangaroo rats and golden moles. Nevertheless, O’Connell-Rodwell was the first to argue that a giant land animal is also sending and receiving seismic signals. ‘I used to lay a male planthopper on a stem and replay the calling sound of a female, and the male would exhibit the same kind of behaviour that happens in elephants: he would freeze, then press down on his legs, move forward a little, then stay still again. I find it so fascinating, and it got me thinking that perhaps auditory communication is not the only thing that is going on.’
Scientists have confirmed that an elephant’s capacity to communicate over long distances is essential for survival, especially in places like Etosha, where more than 2,400 savanna elephants range over an area bigger than New Jersey. It is already difficult for an elephant to find a mate in such a vast wilderness, and elephant reproductive biology only complicates matters. Breeding herds also use low-frequency sounds to send alerts about predators. Even though grown-up elephants have no enemies other than human beings, baby elephants are vulnerable to attacks by lions and hyenas. At the sight of a predator, the older ones in the herd will clump together to form a protective shield before running away.
We now know that elephants can respond to warning calls in the air, but can they detect signals transmitted solely through the ground? To look into that matter, the research team designed an experiment in 2002, using electronic devices that enabled them to transmit signals through the ground at Mushara. ‘The outcomes of our 2002 study revealed that elephants could indeed sense warning signals through the ground,’ O’Connell-Rodwell observes.
Last year, an experiment was set up in the hope of taking that question further. It used three different recordings: the 1994 warning call from Mushara, an anti-predator call recorded by scientist Joyce Poole in Kenya and a made-up warble tone. ‘The data I’ve observed to this point implies that the elephants were responding the way I always expected. However, the fascinating finding is that the anti-predator call from Kenya, which is unfamiliar to them, caused them to gather around, tense up and rumble aggressively as well, but they didn’t always flee. I didn’t expect the results to be that clear-cut.’
IELTS Reading Recent Actual Test 05
READING PASSAGE 1
You should spend about 20 minutes on Questions 1-13 which
are based on Reading Passage 1 below.
The Pearl
A
The pearl has always had a special status among the rich and powerful throughout history. For instance, women in ancient Rome went to bed with pearls on them, so that they could remind themselves of their wealth upon waking up. Pearls had more commercial value than diamonds until jewellers learnt to cut gems. In Eastern countries like Persia, ground pearl powder was used as a medicine to cure anything, including heart disease and epilepsy.
B
Pearls can generally be divided into three categories: natural, cultured and imitation. When an irritant (such as a grain of sand) gets inside a certain type of oyster, mussel, or clam, the mollusc will secrete a fluid as a means of defence to coat the irritant. Gradually, layers accumulate around the irritant until a lustrous natural pearl is formed.
C
A cultured pearl undergoes the same process. There is only one difference between cultured pearls and natural ones: in cultured pearls, the irritant is a bead called ‘mother of pearl’, which is placed in the oyster through surgical implantation. This results in much larger cores in cultivated pearls than those in natural pearls. As long as there are enough layers of nacre (the secreted fluid covering the irritant) to create a gorgeous, gem-quality pearl, the size of the nucleus makes no difference to beauty or durability.
D
Pearls can come from both saltwater and freshwater sources. Pearls from salt water are usually of high quality, although several freshwater pearls are considered high in quality, too. In addition, freshwater pearls often have irregular shapes, with a puffed-rice appearance. Nevertheless, it is a pearl’s individual merits, more than its source, that determine its value. Saltwater pearl oysters are usually cultivated in protected lagoons or volcanic atolls, while most freshwater cultured pearls sold today come from China. There are a number of options for producing cultured pearls: use freshwater or seawater shells, transplant the graft into the mantle or into the gonad, and add a spherical bead or do without one.
E
No matter which method is used to get pearls, the process usually takes several years. Mussels must reach a mature age, which may take almost three years, and then have an irritant implanted. Once the irritant is in place, it takes approximately another three years for a pearl to reach its full size. Sometimes the irritant may be rejected; as a result, the pearl may be seriously deformed, or the oyster may simply die from numerous complications such as disease. At the end of a 5- to 10-year cycle, only half of the oysters may have made it through. Among the pearls that are actually produced, only about 5% will be of high enough quality for jewellery makers.
F
Imitation pearls are another story altogether. The island of Mallorca in Spain is renowned for its imitation pearl industry. In most cases, a bead is dipped into a solution made from fish scales. But this coating is quite thin and often wears off. One way to spot an imitation pearl is to bite it: fake pearls glide across your teeth, while the layers of nacre on real pearls feel gritty.
G
Several factors are taken into account to evaluate a pearl: size, shape, colour, surface quality and lustre. Generally, the three types of pearls rank in descending order of value as follows: natural pearls, cultured pearls and imitation pearls (which are basically worthless). One way for jewellers to tell whether a pearl is natural or cultured is to send it to a gem lab for an X-ray. High-quality natural pearls are extremely rare. Japan’s Akoya pearls are among the glossiest pearls out there, while the South Sea waters of Australia are a cradle for bigger pearls.
H
Historically, the pearls with the highest quality around the globe were found in the Persian Gulf, particularly around Bahrain. These pearls had to be hand-harvested by divers with no advanced equipment. Unfortunately, when large reserves of oil were discovered in the early 1930s, the Persian Gulf’s natural pearl industry came to a sudden end because the contaminated water destroyed the once pristine pearls. These days, India probably has the largest stock of natural pearls. However, it is quite an irony that a large part of India’s stock of natural pearls originally came from Bahrain.
READING PASSAGE 2
You should spend about 20 minutes on Questions
14-26 which are based on Reading Passage 2 below.
How are deserts formed?
A
A
desert refers to a barren section of land, mainly in arid and semi-arid areas,
where there is almost no precipitation, and the environment is hostile for any
creature to inhabit. Deserts have been classified in a number of ways,
generally combining total precipitation, how many days the rainfall occurs,
temperature, humidity, and sometimes additional factors. In some places,
deserts have clear boundaries marked by rivers, mountains or other landforms,
while in other places, there are no clear-cut borders between desert and other
landscape features.
B
In arid areas where no vegetation cover protects the land, sand and dust storms frequently take place. This phenomenon often occurs along the desert margins rather than within the deserts themselves, where there are already no finer materials left. When a steady wind starts to blow, fine particles on the open ground begin vibrating. As the wind picks up, some of the particles are lifted into the air. When they fall back onto the ground, they hit other particles, which are then jerked into the air in their turn, initiating a chain reaction.
C
There has been a tremendous amount of publicity about how severe desertification can be, but the academic community has never agreed on its causes. A common misunderstanding is that a shortage of precipitation causes desertification; in fact, the land in even some barren areas will soon recover after rain falls. More often than not, human activities are responsible for desertification. It may well be that the explosion in world population, especially in developing countries, is the primary cause of soil degradation and desertification. As the population has become denser, the cultivation of crops has pushed into progressively drier areas. These regions are especially likely to go through periods of severe drought, which explains why crop failures are common there. Raising most crops requires the natural vegetation cover to be removed first; when crop failures occur, extensive tracts of land are left devoid of plant cover and thus susceptible to wind and water erosion. Throughout the 1990s, dryland areas went through a population growth of 18.5 per cent, mostly in severely impoverished developing countries.
D
Livestock farming in semi-arid areas accelerates soil erosion and has become one of the causes of advancing desertification. In such areas, where the vegetation is dominated by grasses, the breeding of livestock is a major economic activity. Grasses are necessary for anchoring barren topsoil in a dryland area. When a particular field is grazed by an excessive herd, it will lose vegetation coverage, and the soil will be trampled and pulverised, leaving the topsoil exposed to destructive erosive forces such as winds and unexpected thunderstorms. For centuries, nomads have moved their flocks and herds to wherever pasture can be found, and oases have offered chances for a more settled way of living. For some nomads, wherever they move to, the desert follows.
E
Trees
are of great importance when it comes to maintaining topsoil and slowing down
the wind speed. In many Asian countries, firewood is the chief fuel used for
cooking and heating, which has caused uncontrolled clear-cutting of forests in
dryland ecosystems. When too many trees are cut down, windstorms and dust
storms tend to occur.
F
What’s worse, political conflicts and wars can also contribute to desertification. To escape from invading enemies, refugees will move en masse into some of the most vulnerable ecosystems on the planet. They bring along their cultivation traditions, which might not be the right kind of practice for their new settlement.
G
In the 20th century, one of the states of America had a large section of farmland turn into desert. Since then, measures have been enforced so that such desertification will not happen again. To avoid its recurrence, people must find other livelihoods which do not rely on traditional land uses and are less demanding on local land and natural resources, but can still generate viable income. Such livelihoods include, but are not limited to, dryland aquaculture for the raising of fish, crustaceans and industrial compounds derived from microalgae, greenhouse agriculture, and activities related to tourism. Another way to prevent the recurrence of desertification is to create economic prospects in the city centres of drylands and in places outside drylands. Changing the general economic and institutional structures that generate new chances for people to support themselves would alleviate the current pressures driving the desertification process.
H
In today’s society, new technologies serve as a method of resolving the problems brought by desertification. Satellites have been utilised to investigate the influence that people and livestock have on our planet. Nevertheless, this does not mean that alternative technologies are not needed to help with the problems and processes of desertification.
READING PASSAGE 3
You should spend about 20 minutes on Questions
27-40 which are based on Reading Passage 3 below.
Can Hurricanes be Moderated or Diverted?
A
Each year, massive swirling storms bringing winds greater than 74 miles per hour sweep across tropical oceans and onto shorelines, usually devastating vast swaths of territory. When these roiling tempests strike densely inhabited territories, they have the power to kill thousands and cause property damage worth billions of dollars. And absolutely nothing stands in their way. But can we ever find a way to control these formidable forces of nature?
B
To see why hurricanes and other severe tropical storms may be susceptible to human intervention, a researcher must first learn about their nature and origins. Hurricanes grow in the form of thunderstorm clusters above the tropical seas. Oceans in low-latitude areas never stop giving out heat and moisture to the atmosphere, producing warm, wet air above the sea surface. When this air rises, the water vapour in it condenses to form clouds and precipitation. Condensation releases heat in the process, the same solar heat that was used to evaporate the water at the ocean surface. This so-called latent heat of condensation makes the air more buoyant, causing it to ascend still higher in a self-reinforcing feedback process. At last, a tropical depression starts to form and grow stronger, creating the familiar eye, the calm central hub that a hurricane spins around. On reaching land, the hurricane loses its continuous supply of warm water, which causes it to weaken swiftly.
C
Our current studies were inspired by an intuition I had while learning about chaos theory 30 years ago. Long-range forecasting is complicated because the atmosphere is highly sensitive to small influences, and tiny errors can compound fast in weather-forecasting models. However, this sensitivity also made me realise a possibility: if we intentionally applied some slight inputs to a hurricane, we might exert a strong influence on the storm, either steering it away from densely populated areas or slowing it down. Back then, I was not able to test my ideas, but thanks to the advancement of computer simulation and remote-sensing technologies over the last 10 years, I can now renew my enthusiasm for large-scale weather control.
D
To find out whether the sensitivity of the atmospheric system could be exploited to adjust such robust atmospheric phenomena as hurricanes, our research team ran computer simulations of Hurricane Iniki, which occurred in 1992. Forecasting technologies at the time were far from perfect, so it took us by surprise that our first simulation turned out to be an immediate success. With the goal of altering Iniki’s path in mind, we first picked the spot where we wanted the storm to be after six hours. Then we used this target to generate artificial observations and fed these into the computer model.
E
The most significant alterations turned out to be to the initial temperatures and winds. Typically, the temperature changes across the grid were only tenths of a degree, but the most noteworthy change, an increase of almost two degrees Celsius, took place in the lowest model layer to the west of the storm centre. The calculations produced wind-speed changes of two or three miles per hour. In several spots, however, the rates shifted by as much as 20 mph due to minor redirections of the winds close to the storm’s centre. In terms of structure, the initial and altered versions of Hurricane Iniki seemed almost the same, but the changes in critical variables were so substantial that the altered storm veered off track to the west during the first six hours of the simulation and then travelled due north, leaving Kauai untouched.
F
Future earth-orbiting solar power stations, equipped with large mirrors to focus the sun’s rays and panels of photovoltaic cells to gather and send energy to the Earth, might be adapted to beam microwaves that would be absorbed by water-vapour molecules inside or around the storm. The microwaves would cause the water molecules to vibrate and heat up the surrounding air, which would then cause the hurricane to slow down or move in a preferred direction.
G
Simulations of hurricanes conducted on a computer have implied that by changing the precipitation, evaporation and air temperature, we could alter a storm’s route or abate its winds. Intervention could take many different forms: exquisitely targeted clouds bearing silver iodide or other rainfall-inducing elements might deprive a hurricane’s formidable eyewall, the defining feature of a severe tropical storm, of the water it needs to grow and multiply.
IELTS Reading Recent Actual Test 06
READING PASSAGE 1
You should spend about 20 minutes on Questions 1-13 which
are based on Reading Passage 1 below.
Education Philosophy
A.
Although we lack accurate statistics about child mortality in the pre-industrial period, we do have evidence that in the 1660s, the mortality rate for children in the first 14 days of life was as much as 30 per cent. Nearly all families suffered some premature deaths. Since all parents expected to bury some of their children, they found it difficult to invest in their newborns. Moreover, to protect themselves from the emotional consequences of a child’s death, parents avoided making any emotional commitment to an infant. It is no wonder that we find mothers leaving their babies in gutters, or referring to a death in the same paragraph as a reference to pickles.
B.
The 18th century witnessed the transformation from an agrarian economy to an industrial one, one of the vital social changes taking place in the Western world. An increasing number of people moved from their villages and small towns to big cities, where life was quite different. Social supports which had previously existed in smaller communities were replaced by ruthless problems such as poverty, crime, substandard housing and disease. Due to the need for additional income to support the family, young children from the poorest families were forced into early employment, and thus their childhood became painfully short. Children as young as 7 might be required to work full-time, subjected to unpleasant and unhealthy circumstances, from factories to prostitution. Although such a role has disappeared in most wealthy countries, the practice of childhood employment remains a staple in underdeveloped countries and has rarely disappeared entirely.
C.
The lives of children underwent a drastic change during the 1800s in the United States. Previously, children from both rural and urban families were expected to take part in everyday labour because of the bulk of hard manual work. Nevertheless, thanks to the technological advances of the mid-1800s, coupled with the rise of the middle class and the redefinition of family members’ roles, work and home became less synonymous over time. People began to purchase toys and books for their children. As the country came to depend more upon machines, children in rural and urban areas were less likely to be required to work at home. Beginning with the Industrial Revolution and rising slowly over the course of the 19th century, this trend increased exponentially after the Civil War. John Locke, one of the most influential writers of his period, created the first clear and comprehensive statement of the ‘environmental position’, that family education determines a child’s life, and through this he became the father of modern learning theory. During the colonial period, his teachings about child care gained considerable recognition in America.
D.
According to Jean Jacques Rousseau, who lived in the era of the American and French Revolutions, people were ‘noble savages’ in the original state of nature: innocent, free and uncorrupted. In 1762, Rousseau wrote the famous novel Emile to convey his educational philosophy through the story of a boy’s education from infancy to adulthood. This work was based on his extensive observation of children and adolescents, their individuality, his developmental theory and the memories of his own childhood. He contrasts children with adults and describes their age-specific characteristics in terms of historical perspective and developmental psychology. Johann Heinrich Pestalozzi, who lived during the early stages of the Industrial Revolution, sought to develop schools to nurture children’s all-round development. He agreed with Rousseau that humans are naturally good but are spoiled by a corrupt society. His approach to teaching consists of general and special methods, and his theory was based upon establishing an emotionally healthy, homelike learning environment, which had to be in place before more specific instruction occurred.
E.
One of the best-documented cases bearing on Pestalozzi’s theory concerned a so-called feral child named Victor, who was captured in a small town in the south of France in 1800. Prepubescent, mute, naked, and perhaps 11 or 12 years old, Victor had been seen foraging for food in the gardens of the locals and sometimes accepted people’s direct offers of food before his final capture. Eventually, he was brought to Paris, where he was expected to answer some profound questions about human nature, but that goal was quashed very soon. A young physician, Jean Marc Gaspard Itard, was optimistic about Victor’s future and initiated a five-year education plan to civilise him and teach him to speak. With a subsidy from the government, Itard recruited a local woman, Madame Guerin, to assist him in providing a semblance of a home for Victor, and he spent an enormous amount of time and effort working with Victor. Itard’s goal of teaching Victor the basics of speech was never fully achieved, but Victor did learn some elementary forms of communication.
F. Although other educators were beginning to recognise the simple truth embedded in Rousseau’s philosophy, it is not enough to identify the stages of children’s development alone; education must be geared towards those stages. One early example was the invention of the kindergarten, a word and a movement created by the German-born educator Friedrich Froebel in 1840. Froebel placed a high value on the importance of play in children’s learning. His invention would eventually spread around the world in a variety of forms. Froebel’s ideas were inspired by his cooperation with Johann Heinrich Pestalozzi. Froebel did not introduce the notion of the kindergarten until he was 58 years old, by which time he had been a teacher for four decades. The kindergarten was a haven and a preparation for children who were about to enter the regimented educational system. The use of guided or structured play was a cornerstone of his kindergarten education because he believed that play was the most significant aspect of development at this time of life. Play served as a mechanism for a child to grow emotionally and to achieve a sense of self-worth. Meanwhile, teachers served to organise materials and a structured environment in which each child, as an individual, could achieve these goals. When Froebel died in 1852, dozens of kindergartens had been created in Germany. Kindergartens spread across Europe, and the movement eventually reached and flourished in the United States in the 20th century.
READING PASSAGE 2
You should spend about 20 minutes on Questions
14-27 which are based on Reading Passage 2 below.
The start of the automobile’s history goes all the way back to 1769, when automobiles running on steam engines were invented as carriers for human transport. In 1806, the first cars powered by an internal combustion engine came into being, which paved the way for the introduction of the widespread modern petrol-fuelled internal combustion engine in 1885.
It is generally acknowledged that the first practical automobiles equipped with petrol/gasoline-powered internal combustion engines were invented almost at the same time by different German inventors working independently. Karl Benz first built an automobile in 1885 in Mannheim. Benz attained a patent for his invention on 29 January 1886, and in 1888 he started to produce automobiles in a company that later became the renowned Mercedes-Benz.
As the 20th century began, the automobile industry marched into the transportation market for the wealthy. Drivers at that time were an adventurous bunch; they would go out regardless of the weather, even though they weren’t protected by an enclosed body or a convertible top. Everybody in the community knew who owned what car, and cars immediately became a symbol of identity and status. Later, cars became more popular among the public because they allowed people to travel whenever and wherever they wanted. The price of automobiles in Europe and North America kept dropping, and more people from the middle class could afford them. This was especially attributable to Henry Ford, who did two crucial things. First, he priced his cars as reasonably as possible; second, he paid his employees enough so that they could afford the cars made by their very own hands.
The trend of interchangeable parts and mass production in an assembly-line style had been led by America, and from 1914 this concept was significantly reinforced by Henry Ford. Large-scale, production-line manufacture of affordable automobiles made its debut. A Ford car would come off the line fully assembled every 15 minutes, an interval shorter than any of the former methods allowed. Not only did this raise productivity, it also cut down on the requirement for manpower. Ford significantly lowered the chance of injury by carrying out complex safety procedures in production, particularly assigning workers to specific locations rather than giving them the freedom to wander around. This mixture of high wages and high efficiency came to be known as Fordism, and it provided a valuable lesson for most major industries.
The first Jeep automobile, which came out as the prototype Bantam BRC, was the primary light 4-wheel-drive automobile of the U.S. Army and the Allies, and during World War II and the postwar period its sales skyrocketed. Since then, plenty of Jeep derivatives with similar military and civilian functions have been created and continually upgraded in overall performance in other nations.
Throughout the 1950s, engine power and vehicle speeds grew higher, designs evolved into a more integrated and artful form, and cars spread globally. In the 1960s, the landscape changed as Detroit was confronted with foreign competition. The European manufacturers used the latest technology, and Japan came into the picture as a dedicated car-making country. General Motors, Chrysler, and Ford dabbled with radical tiny cars such as the GM A-bodies, with little success. As joint ventures such as the British Motor Corporation unified the market, captive imports and badge engineering swept all over the US and the UK. BMC launched the revolutionary, space-efficient Mini in 1959, which turned out to reap large global sales. Previously sold under the Austin and Morris names, the Mini became a marque in its own right in 1969. The trend of corporate consolidation reached Italy when niche makers such as Maserati, Ferrari, and Lancia were bought by larger enterprises. By the end of the 20th century, there had been a sharp fall in the number of automobile marques.
In the US, car performance dominated marketing, as typified by pony cars and muscle cars. However, in the 1970s everything changed, as the American automobile industry suffered from the 1973 oil crisis, competition from Japanese and European imports, automobile emission-control regulations and moribund innovation. The irony in all this was that full-size sedans such as Cadillac and Lincoln staged a huge comeback during the years of economic crisis.
In terms of technology, the most notable developments of the postwar era were the widespread use of independent suspensions, the broader application of fuel injection, and a growing emphasis on safety in automobile design. Mazda achieved many triumphs with its rotary engine, though it gained a reputation as a gas-guzzler.
The modern era has also witnessed a sharp improvement in fuel efficiency through engine management systems aided by the computer. Nowadays, most automobiles in use are powered by an internal combustion engine fuelled by gasoline or diesel. Toxic gases from both fuels are known to pollute the air and are responsible for climate change as well as global warming.
READING PASSAGE 3
You should spend about 20 minutes on Questions
28-40 which are based on Reading Passage 3 below.
Company Innovation
A.
In a shabby office in downtown Manhattan, a group of 30 AI (artificial intelligence) programmers from Umagic are attempting to mimic the brains of a famous sexologist, a celebrated dietitian, a popular fitness coach and a bunch of other specialists. Umagic Systems is an up-and-coming firm which sets up websites that enable its clients to seek advice from virtual versions of those figures. Users put in all the information regarding themselves and their objectives; it is then Umagic’s job to give the advice that the star expert would give. Even though the neuroses of American consumers have always been a marketing focus, the future of Umagic is difficult to predict (who knows what it will be like in ten years? Asking a computer about your sex life might be either normal or crazy). However, companies such as Umagic are starting to intimidate major American firms, because these young companies regard half-crazy ‘creative’ ideas as the portal to their future triumph.
B.
Innovation has established itself as the catchword of American business management. Enterprises have realised that they are running out of things that can be outsourced or re-engineered (worryingly, by their competitors too). Today’s winners in American business tend to be companies with innovative powers such as Dell, Amazon and Wal-Mart, which have come up with concepts or goods that have reshaped their industries.
C.
According to a new book by two consultants from Arthur D. Little, during the last 15 years the top 20% of firms in Fortune magazine’s annual innovation survey have attained twice the shareholder returns of their peers. The desperate search for new ideas drives a large part of today’s merger boom. The same goes for the money spent on licensing and purchasing others’ intellectual property. Based on statistics from the Pasadena-based Patent & Licence Exchange, trade in intangible assets in America went up from $15 billion in 1990 to $100 billion in 1998, with small firms and individuals taking an increasing share of the rewards.
D. And that terrifies big companies: it appears that innovation does not sit well with them. Some major companies long known for ‘innovative ideas’, such as 3M, Procter & Gamble, and Rubbermaid, have recently had dry spells. Peter Chernin, who runs the Fox TV and film empire for News Corporation, points out that ‘in the management of creativity, size is your enemy.’ It is impossible for someone managing 20 movies to be as involved as someone doing 5. He has therefore tried to divide the studio into smaller parts, disregarding the risk of higher expenses.
E.
Nowadays, ideas are more likely to prosper
outside big companies. In the old days, when a brilliant scientist came up with
an idea and wanted to make money out of it, he would take it to a big company
first. But now, with all this cheap venture capital around, he would probably
want to commercialise it by himself. So far, Umagic has already raised $5m and
is on its way to another $25m. Even in the case of capital-intensive businesses
like pharmaceuticals, entrepreneurs have the option to conduct early-stage
research and sell out to the big firms when they’re faced with costly, risky
clinical trials. Approximately 1/3 of drug firms’ total revenue is now from
licensed-in technology.
F.
Some of the major enterprises such as
General Electric and Cisco have been impressively triumphant when it comes to
snapping up and integrating scores of small companies. However, other giants are
concerned about the money they have to spend and how to keep the geniuses
who generated the ideas. It is the dream of everyone to develop more ideas
within their organisations. Procter & Gamble is currently switching its
entire business focus from countries to products; one of the goals is to get
the whole company to accept the innovations. In other places, the craving for
innovation has caused a frenzy for ‘intrapreneurship’: transferring power and
establishing internal idea-workshops and tracking inventory so that the talents
will stay.
G.
Some people don’t believe that this kind
of restructuring is sufficient. Clayton Christensen argues in a new book that big
firms’ many advantages, such as taking care of their existing customers, can
get in the way of the innovative behaviour that is necessary for handling
disruptive technologies. That’s why there’s been the trend of cannibalisation,
which brings about businesses that will confront and jeopardise the existing
ones. For example, Bank One has set up Wingspan, which is an online bank that
in fact competes with its actual branches.
H. There’s no denying that innovation is a big deal. However, do major firms have to be this pessimistic? According to a recent survey of the top 50 innovations in America by Industry Week, ideas are equally likely to come from both big and small companies. Big companies can adopt new ideas when they are mature enough and the risks and rewards have become more quantifiable.
IELTS Reading Recent Actual Test 07
READING PASSAGE 1
You
should spend about 20 minutes on Questions 1-13 which are based on Reading
Passage 1 below.
The Extraordinary Watkin Tench
At the
end of the 18th century, life for the average British citizen was changing. The
population grew as health and industrialisation took hold of the country.
However, land and resources were limited. Families could not guarantee jobs for
all of their children. People who were poor or destitute had little option. To
make things worse, the rate of people who turned to crime to make a living
increased. In Britain, the prisons were no longer large enough to hold the
convicted people of this growing criminal class. Many towns and governments
were at a loss as to what to do. However, another phenomenon that was happening
in the 18th century was the exploration of other continents. There were many
ships looking for crew members who would risk a months-long voyage across a vast
ocean. This job was risky and dangerous, so few would willingly choose it.
However, with so many citizens without jobs or with criminal convictions, they
had little choice. One such member of this new lower class of British citizens
was Watkin Tench. Between 1788 and 1868, approximately 161,700 convicts were
transported to the Australian colonies of New South Wales, Van Diemen’s Land
and Western Australia. Tench was one of these unlucky convicts to sign onto a
dangerous journey. When his ship set out in 1788, he signed on for three years’
service with the First Fleet.
Apart
from his years in Australia, people knew little about his life back in Britain.
It was said he was born on 6 October 1758 at Chester in the county of Cheshire
in England. He came from a decent background. Tench was a son of Fisher Tench,
a dancing master who ran a boarding school in the town and Margaritta Tarleton
of the Liverpool Tarletons. He grew up around a finer class of British
citizens, and his family helped instruct the children of the wealthy in formal
dance lessons. Though we don’t know for sure how Tench was educated in this
small British town, we do know that he was well educated. His diaries from his
travels to Australia are written in excellent English, a skill that not
everyone was lucky enough to possess in the 18th century. Aside from this, we know
little of Tench’s beginnings. We don’t know how he ended up convicted of a
crime. But after he started his voyage, his life changed dramatically.
During
the voyage, which was harsh and took many months, Tench described the landscapes of
different places. While sailing to Australia, Tench saw landscapes that were
unfamiliar and new to him. Arriving in Australia, the entire crew was uncertain
of what was to come in their new life. When they arrived in Australia, they
established a British colony. Governor Philip was vested with complete
authority over the inhabitants of the colony. Though still a young man, Philip
was enlightened for his age. From stories of other British colonies, Philip
learnt that conflict with the original peoples of the land was often a source
of strife and difficulties. To avoid this, Philip’s personal intent was to
establish harmonious relations with local Aboriginal people. But Philip’s job
was even more difficult considering his crew. Other colonies were established
with middle-class merchants and craftsmen. His crew were convicts, who had few
other skills outside of their criminal histories. Along with making peace with
the Aboriginal people, Philip also had to try to reform as well as discipline
the convicts of the colony.
From
the beginning, Tench stood out as different from the other convicts. During his
initial time in Australia, he quickly rose in his rank, and was given extra
power and responsibility over the convicted crew members. However, he was also
still very different from the upper-class rulers who came to rule over the
crew. He showed humanity towards the convicted workers. He didn’t want to treat
them as common criminals, but as trained military men. Under his authority,
Tench released the convicts from the chains which were used to control them during the
voyage. Tench also showed mercy towards the Aboriginal people. Governor Philip
often pursued violent solutions to conflicts with the Aboriginal peoples. Tench
disagreed strongly with this method. At one point, he was unable to follow an
order given by Governor Philip to punish ten Aboriginals.
When
they first arrived, Tench was fearful and contemptuous towards the Aboriginals,
because the two cultures did not understand each other. However, gradually he
got to know them individually and became close friends with them. Tench knew
that the Aboriginal people would not cause them conflict if they looked for a
peaceful solution. Though there continued to be conflict and violence, Tench’s
efforts helped establish a more peaceful negotiation between the two groups
when they settled territory and land-use issues.
Meanwhile,
many changes were made to the new colony. The Hawkesbury River was named by
Governor Philip in June 1789. Many bird species native to the river were hunted
by travelling colonists. The colonists were having a great impact on the land
and natural resources. Though the colonists had made a lot of progress in the
untamed lands of Australia, there were still limits. The convicts were
notoriously ill-informed about Australian geography, as was evident in the
attempt by twenty absconders to walk from Sydney to China in 1791, believing:
“China might be easily reached, being not more than a hundred miles distant,
and separated only by a river.” In reality, miles of ocean separated the two.
Much of Australia was unexplored by the convicts. Even Tench had little understanding of what existed beyond the established lines of their colony. Slowly, but surely, the colonists expanded into the surrounding area. A few days after arrival at Botany Bay, their original location, the fleet moved to the more suitable Port Jackson where a settlement was established at Sydney Cove on 26 January 1788. This second location was strange and unfamiliar, and the fleet was on alert for any kind of suspicious behaviour. Though Tench had made friends in Botany Bay with Aboriginal peoples, he could not be sure this new land would be uninhabited. He recalled the first time he stepped onto this unfamiliar ground with a boy who helped Tench navigate. In these new lands, he met an old Aboriginal.
READING PASSAGE 2
You
should spend about 20 minutes on Questions 14-26 which are based on Reading
Passage 2
Stress of Workplace
A.
How busy is too busy? For some it means
having to miss the occasional long lunch; for others it means missing lunch
altogether. For a few, it is not being able to take a “sickie” once a month.
Then there is a group of people for whom working every evening and weekend is
normal, and franticness is the tempo of their lives. For most senior
executives, workloads swing between extremely busy and frenzied. The
vice-president of the management consultancy AT Kearney and its head of
telecommunications for the Asia-Pacific region, Neil Plumridge, says his work
weeks vary from a “manageable” 45 hours to 80 hours, but average 60 hours.
B.
Three warning signs alert Plumridge about
his workload: sleep, scheduling and family. He knows he has too much on when he
gets less than six hours of sleep for three consecutive nights; when he is
constantly having to reschedule appointments; “and the third one is on the
family side”, says Plumridge, the father of a three-year-old daughter, and expecting
a second child in October. “If I happen to miss a birthday or anniversary, I
know things are out of control.” Being “too busy” is highly subjective. But for
any individual, the perception of being too busy over a prolonged period can
start showing up as stress: disturbed sleep, and declining mental and physical
health. National workers’ compensation figures show stress causes the most lost
time of any workplace injury. Employees suffering stress are off work an
average of 16.6 weeks. The effects of stress are also expensive. Comcare, the
Federal Government insurer, reports that in 2003-04, claims for psychological
injury accounted for 7% of claims but almost 27% of claim costs. Experts say
the key to dealing with stress is not to focus on relief—a game of golf or a
massage—but to reassess workloads. Neil Plumridge says he makes it a priority
to work out what has to change; that might mean allocating extra resources to a
job, allowing more time or changing expectations. The decision may take several
days. He also relies on the advice of colleagues, saying his peers coach each
other with business problems. “Just a fresh pair of eyes over an issue can
help,” he says.
C.
Executive stress is not confined to big
organisations. Vanessa Stoykov has been running her own advertising and public
relations business for seven years, specialising in work for financial and
professional services firms. Evolution Media has grown so fast that it debuted
on the BRW Fast 100 list of fastest-growing small enterprises last year—just
after Stoykov had her first child. Stoykov thrives on the mental stimulation of
running her own business. “Like everyone, I have the occasional day when I
think my head’s going to blow off,” she says. Because of the growth phase the
business is in, Stoykov has to concentrate on short-term stress relief—weekends
in the mountains, the occasional “mental health” day—rather than delegating
more work. She says: “We’re hiring more people, but you need to train them,
teach them about the culture and the clients, so it’s actually more work rather
than less.”
D.
Identify the causes: Jan Eisner, Melbourne
psychologist who specialises in executive coaching, says thriving on a
demanding workload is typical of senior executives and other high-potential
business people: some thrive on short adrenalin periods followed by quieter patches, while others thrive
under sustained pressure. “We could take urine and blood hormonal measures and
pass a judgement of whether someone’s physiologically stressed or not,” she
says. “But that’s not going to give us an indicator of what their experience of
stress is, and what the emotional and cognitive impacts of stress are going to
be.”
E.
Eisner’s practice is informed by a
movement known as positive psychology, a school of thought that argues
“positive” experiences—feeling engaged, challenged, and that one is making a
contribution to something meaningful—do not balance out negative ones such as
stress; instead, they help people increase their resilience over time. Good
stress, or positive experiences of being challenged and rewarded, is thus
cumulative in the same way as bad stress. Eisner says many of the senior
business people she coaches are relying more on regulating bad stress through
methods such as meditation and yoga. She points to research showing that meditation
can alter the biochemistry of the brain and actually help people “retrain” the
way their brains and bodies react to stress. “Meditation and yoga enable you to
shift the way that your brain reacts, so if you get proficient at it you’re in
control.”
F.
Recent research, such as last year’s study
of public servants by the British epidemiologist Sir Michael Marmot, shows the
most important predictor of stress is the level of job control a person has.
This debunks the theory that stress is the prerogative of high-achieving
executives with type-A personalities and crazy working hours. Instead, Marmot’s
and other research reveals that such executives have the best kind of job: one that combines
high demands (challenging work) with high control (autonomy). “The worst jobs
are those that combine high demands and low control. People with demanding jobs
but little autonomy have up to four times the probability of depression and
more than double the risk of heart disease,” LaMontagne says. “Those two alone
account for an enormous part of chronic diseases, and they represent a
potentially preventable part.” Overseas, particularly in Europe, such research
is leading companies to redesign organisational practices to increase
employees’ autonomy, cutting absenteeism and lifting productivity.
G.
The Australian vice-president of AT
Kearney, Neil Plumridge says, “Often stress is caused by our setting
unrealistic expectations of ourselves. I’ll promise a client I’ll do something
tomorrow, and then [promise] another client the same thing, when I really know
it’s not going to happen. I’ve put stress on myself when I could have said to
the clients: ‘Why don’t I give that to you in 48 hours?’ The client doesn’t
care.” Overcommitting is something people experience as an individual problem.
We explain it as the result of procrastination or Parkinson’s law: that work
expands to fill the time available. New research indicates that people may be
hard-wired to do it.
H. A study in the February issue of the Journal of Experimental Psychology shows that people always believe they will be less busy in the future than now. This is a misapprehension, according to the authors of the report, Professor Gal Zauberman, of the University of North Carolina, and Professor John Lynch, of Duke University. “On average, an individual will be just as busy two weeks or a month from now as he or she is today. But that is not how it appears to be in everyday life,” they wrote. “People often make commitments long in advance that they would never make if the same commitments required immediate action. That is, they discount future time investments relatively steeply.” Why do we perceive a greater “surplus” of time in the future than in the present? The researchers suggest that people underestimate completion times for tasks stretching into the future, and that they are bad at imagining future competition for their time.
READING PASSAGE 3
You should spend about 20 minutes on Questions
27-40 which are based on Reading Passage 3 below.
Improving Patient Safety
Packaging
One of
the most prominent design issues in pharmacy is that of drug packaging and
patient information leaflets (PILs). Many letters have appeared in The
Journal’s letters pages over the years from pharmacists dismayed at the designs
of packaging that are “accidents waiting to happen”.
Packaging
design in the pharmaceutical industry is handled by either in-house teams or
design agencies. Designs for over-the-counter medicines, where characteristics
such as attractiveness and distinguishability are regarded as significant, are
usually commissioned from design agencies. A marketing team will prepare a
brief and the designers will come up with perhaps six or seven designs. These
are whittled down to two or three that might be tested on a consumer group. In
contrast, most designs for prescription-only products are created in-house. In
some cases, this may simply involve applying a company’s house design (ie,
logo, colour, font, etc). The chosen design is then handed over to design
engineers who work out how the packaging will be produced.
Design
considerations
The
author of the recently published “Information design for patient safety,” Thea
Swayne, tracked the journey of a medicine from manufacturing plant, through
distribution warehouses, pharmacies and hospital wards, to patients’ homes. Her
book highlights a multitude of design problems with current packaging, such as
look-alikes and sound-alikes, small type sizes and glare on blister foils.
Situations in which medicines are used include a parent giving a cough medicine
to a child in the middle of the night and a busy pharmacist selecting one box
from hundreds. It is argued that packaging should be designed for moments such
as these. “Manufacturers are not aware of the complex situations into which
products go. As designers, we are interested in not what is supposed to happen
in hospital wards, but what happens in the real world,” Ms Swayne said.
Incidents
where vincristine has been injected intrathecally (into the spine) instead of
intravenously are a classic example of how poor design can contribute to harm.
Investigations following these tragedies have attributed some blame to poor typescript.
Safety
and compliance
Child
protection is another area that gives designers opportunities to improve
safety. According to the Child Accident Prevention Trust, seven out of 10
children admitted to hospital with suspected poisoning have swallowed
medicines. Although child-resistant closures have reduced the number of
incidents, they are not fully child-proof. The definition of such a closure is
one that not more than 15 percent of children aged between 42 and 51 months can
open within five minutes. There is scope for improving what is currently
available, according to Richard Mawle, a freelance product designer. “Many
child-resistant packs
are based on strength. They do not necessarily prevent a child from access, but may
prevent people with a disability,” he told The Journal. “The legal requirements
are there for a good reason, but they are not good enough in terms of the
users,” he said. “Older people, especially those with arthritis, may have the
same level of strength as a child,” he explained, and suggested that better
designs could rely on cognitive skills (eg, making the opening of a container a
three-step process) or be based on the physical size of hands.
Mr. Mawle worked with GlaxoSmithKline on a project to improve compliance through design, which involved applying his skills to packaging and PILs. Commenting on the information presented, he said: “There can be an awful lot of junk at the beginning of PILs. For example, why are company details listed towards the beginning of a leaflet when what might be more important for the patient is that the medicine should not be taken with alcohol?”
Design
principles and guidelines
Look-alike
boxes present a potential for picking errors and an obvious solution would be
to use colours to highlight different strengths. However, according to
Ms Swayne, colour differentiation needs to be approached with care. Not only
should strong colour contrasts be used, but designating a colour to a
particular strength (colour coding) is not recommended because this could lead
to the user not reading the text on a box.
Design
features can provide the basis for lengthy debates. For example, one argument
is that if all packaging is white with black lettering, people would have no
choice but to read every box carefully. The problem is that trials of drug
packaging design are few—common studies of legibility and comprehensibility
concern road traffic signs and visual display units. Although some designers
take results from such studies into account, proving that a particular feature
is beneficial can be difficult. For example, EU legislation requires that packaging
must now include the name of the medicine in Braille but, according to Karel
van der Waarde, a design consultant to the pharmaceutical industry, “it is not
known how much visually impaired patients will benefit nor how much the reading
of visually able patients will be impaired”.
More
evidence might, however, soon be available. EU legislation requires PILs to
reflect consultations with target patient groups to ensure they are legible,
clear and easy to use. This implies that industry will have to start conducting
tests. Dr. van der Waarde has performed readability studies on boxes and PILs
for industry. A typical study involves showing a leaflet or package to a small
group and asking them questions to test understanding. Results and comments are
used to modify the material, which is then tested on a larger group. A third
group is used to show that any further changes made are an improvement. Dr. van
der Waarde is, however, sceptical about the legal requirements and says that
many regulatory authorities do not have the resources to handle packaging
information properly. “They do not look at the use of packaging in a practical
context—they only see one box at a time and not several together as pharmacists
would do,” he said.
Innovations
The RCA innovation exhibition this year revealed designs for a number of innovative objects. “The popper”, by Hugo Glover, aims to help arthritis sufferers remove tablets from blister packs, and “pluspoint”, by James Cobb, is an adrenaline auto-injector that aims to overcome the fact that many patients do not carry their auto-injectors due to their prohibitive size. The aim of good design, according to Roger Coleman, professor of inclusive design at the RCA, is to try to make things more user-friendly as well as safer. Surely, in a patient-centred health system, that can only be a good thing. “Information design for patient safety” is not intended to be mandatory. Rather, its purpose is to create a basic design standard and to stimulate innovation. The challenge for the pharmaceutical industry, as a whole, is to adopt such a standard.
IELTS Reading Recent Actual Test 08
READING PASSAGE 1
You should spend about 20 minutes on Questions 1-13 which
are based on Reading Passage 1
The Connection Between Culture and Thought
A.
The world’s population has surpassed 7
billion and continues to grow. Across the globe, humans have many differences.
These differences can be influenced by factors such as geography, climate,
politics, nationality, and many more. Culture is one such aspect that can
change the way people behave.
B.
Your culture may influence your clothing,
your language, and many aspects of your life. But is culture influential enough
to change the way an individual thinks? It has long been believed that people
from different cultures would think differently. For example, a young boy from
a farm would talk about cows while a boy from New York would talk about cars. If
two young children from different countries are asked about their thoughts
about a painting, they would answer differently because of their cultural
backgrounds.
C.
In recent years, there has been new
research that has changed this long-held belief. However, this new research is not
the first to explore the idea that culture can change the way we think. Earlier
research has provided valuable insight into the question. One of the earliest
research projects was carried out in the Soviet Union. This project was
designed to find out whether culture would affect people’s way of thought
processing. The researchers focused on how living environment and nationality
might influence how people think. The experiment led by Bessett aimed to
question such assumptions of cognitive psychology. Bessett conducted several
versions of the experiment to test different cognitive processes.
D.
One experiment led by Bessett and Masuku
showed an animated video picturing a big fish swimming among smaller fish and
other sea creatures. Subjects were asked to describe the scene. The Japanese
participants tended to focus on the aquatic background, such as the plants and
colour of the water, as well as the relationship between the big and small
fish. American participants tended to focus on the individual fish, mainly the
larger, more unique looking fish. The experiment suggested that members of
Eastern cultures focus more on the overall picture, while members of Western
culture focus more on the individuals.
E.
In another experiment performed by Bessett
and Choi, the subjects were presented with some very convincing evidence for a
position. Both the Korean and the American participants showed strong support. And after
they were given some evidence opposing the position, the Korean participants started to
modify or decrease their support. However, the American participants began to give more
support to the former argument. This project suggested that in Korean culture,
support for arguments is based on context. Ideas and conclusions are changeable
and flexible, so an individual may be more willing to change his or her mind.
The Americans, however, were less willing to change their original conclusion.
F.
Bessett and Ara devised an experiment to
test the thought processing of both the oriental and occidental worlds. Test
subjects were given the argument: “All animals with fur hibernate. Rabbits have fur.
Therefore, rabbits hibernate.” People from the Eastern world questioned the
argument as not being logical, because in their knowledge some furry animals
just don’t hibernate. But the Americans thought the statement was right. They
assumed the logical deduction was based on a correct argument, thus the conclusion
was right since the logic was right.
G.
From these early experiments in the Soviet
Union, one might conclude that our original premise— that culture can impact
the way we think—was still correct. However, recent research criticises this
view, as well as Bessett’s early experiments. Though these experiments changed
the original belief on thought processing, how much the results stem from these
factors needs further discussion. Fischer doubts that Bessett’s experiments provide
valuable information, because his research only provides qualitative
descriptions, not results from a controlled environment. Chang partly agrees with
him, because there are some social factors that might influence the results.
H.
Another criticism of Bessett’s experiments
is that culture was studied as a sub-factor of nationality. The experiments
assumed that culture would be the same among all members of a nationality. For
example, every American that participated in the experiments could be assumed
to have the same culture. In reality, culture is much more complicated than
nationality. These early experiments did not control for other factors, such as
socioeconomic status, education, ethnicity, and regional differences in
culture. All of these factors could have a big effect on the individual’s
response.
I.
A third criticism of Bessett’s experiment
is that the content itself should have been more abstract, such as a puzzle or
an IQ test. With objective content, such as nature and animals, people from
different countries of the world might have different pre-conceived ideas about
these animals. Prior knowledge based on geographic location would further
complicate the results. A test that is more abstract, or more quantitative,
would provide a more controlled study of how cognitive processing works for
different groups of people.
J. The research on culture’s effect on cognitive processing still goes on today, and while some criticisms exist of Bessett’s early studies, the projects still provide valuable insight. It is important for future research projects to control carefully for the variables, such as culture. Something like culture is complex and difficult to define. It can also be influenced by many other variables, such as geography or education styles. When studying a variable like culture, it is critical that the researcher create a clear definition for what is—and what is not—considered culture.
K. Another important aspect of modern research is the ethical impact of the research. A researcher must consider carefully whether the results of the research will negatively impact any of the groups involved. In an increasingly globalised job economy, generalisations made about nationalities can be harmful to prospective employees. This information could also impact the way tests and university admissions standards are designed, which would potentially favor one group or create a disadvantage for another. When conducting any research about culture and nationality, researchers should consider all possible effects, positive or negative, that their conclusions may have when published for the world to see.
READING PASSAGE 2
You should spend about 20 minutes on Questions
14-26 which are based on Reading Passage 2
How Well Do We Concentrate?
A
Do you
read while listening to music? Do you like to watch TV while finishing your homework?
People who have these kinds of habits are called multi-taskers. Multitaskers
are able to complete two tasks at the same time by dividing their focus.
However, Thomas Lehman, a researcher in Psychology, believes people never
really do multiple things simultaneously. Maybe a person is reading while
listening to music, but in reality, the brain can only focus on one task.
Reading the words in a book will cause you to ignore some of the words of the
music. When people think they are accomplishing two different tasks
efficiently, what they are really doing is dividing their focus. While
listening to music, people become less able to focus on their surroundings. For
example, we all have experience of times when we talk with friends and they are
not responding properly. Maybe they are listening to someone else talk, or
maybe they are reading a text on their smart phone and don’t hear what you are
saying. Lehman called this phenomenon “email voice”.
B
The
world has been changed by computers and their spin-offs like smart-phones or
cellphones. Now that most individuals have a personal device, like a
smart-phone or a laptop, they are frequently reading, watching or listening to
virtual information. This raises the occurrence of multitasking in our day to
day life. Now when you work, you work with your computer, your cellphone, and
some colleagues who may drop by at any time to speak with you. In professional
meetings, when one normally focuses and listens to one another, people are more
likely to have a cell phone in their lap, reading or communicating silently
with more people than ever. Even inventions such as the cordless phone have
increased multitasking. In the old days, a traditional wall phone would ring,
and then the housewife would have to stop her activities to answer it. When it
rang, the housewife would sit down with her legs up and chat, with no laundry
or sweeping or answering the door. In the modern era, our technology is
convenient enough to not interrupt our daily tasks.
C
Earl
Miller, an expert at the Massachusetts Institute of Technology, studied the
prefrontal cortex, which controls the brain while a person is multitasking.
According to his studies, the size of this cortex varies between species. He
found that for humans, the size of this part constitutes one third of the
brain, while it is only 4 to 5 percent in dogs, and about 15 percent in monkeys. Given
that this cortex is larger in a human, it allows a human to be more flexible
and accurate in his or her multitasking. However, Miller wanted to look
further into whether the cortex was truly processing information about two
different tasks simultaneously. He designed an experiment where he
presented visual stimuli to his subjects in a way that mimics multitasking.
Miller then attached sensors to the subjects’ heads to pick up the electric
patterns of the brain. These sensors would show whether the brain particles, called
neurons, were truly processing two different tasks. What he found is that the
brain neurons only lit up in singular areas one at a time, and never
simultaneously.
D
David
Meyer, a professor at the University of Michigan, studied young adults in a
similar experiment. He instructed them to simultaneously do math problems and
classify simple words into different categories. From this experiment, Meyer
found that when you think you are doing several jobs at the same time, you are
actually switching between jobs. Even though the people tried to do the tasks
at the same time, and both tasks were eventually accomplished, overall, the
tasks took more time than if the person had focused on a single task one at a time.
E
People sacrifice efficiency when multitasking. Gloria Mark took office workers as her subjects and found that they were constantly multitasking. She observed that people at work were disrupted nearly every 11 minutes. She found that doing different jobs at the same time may actually save time; however, despite the fact that workers are faster, it does not mean they are more efficient. And we are equally likely to self-interrupt as to be interrupted by outside sources. She found that in the office, nearly every 12 minutes an employee would stop and, for no reason at all, check a website on their computer, call someone or write an email. If they concentrated for more than 20 minutes, they would feel distressed. She suggested that the average person may suffer from a short concentration span. This short attention span might be natural, but others suggest that new technology may be the problem. With cellphones and computers at our sides at all times, people will never run out of distractions. The formats of media, such as advertisements, music, news articles and TV shows, are also shortening, so people are used to paying attention to information for only a very short time.
F
So even though focusing on one single task is the most efficient way for our brains to work, it is not practical to use this method in real life. According to human nature, people feel more comfortable and efficient in environments with a variety of tasks. Edward Hallowell said that people are losing a lot of efficiency in the workplace due to multitasking, outside distractions and self-distractions. As a matter of fact, the changes made to the workplace do not have to be dramatic. No one is suggesting we ban e-mail or make employees focus on only one task. However, certain common workplace tasks, such as group meetings, would be more efficient if we banned cell-phones, a common distraction. A person can also apply these tips to prevent self-distraction. Instead of arriving at your office and checking all of your e-mails for new tasks, a common workplace ritual, a person could dedicate an hour to a single task first thing in the morning. Self-timing is a great way to reduce distraction and efficiently finish tasks one by one, instead of slowing ourselves down with multitasking.
READING PASSAGE 3
You should spend about 20 minutes on Questions
27-40 which are based on Reading Passage 3 below.
Robert Louis Stevenson
A Scottish
novelist, poet, essayist, and travel writer, Robert Louis Stevenson was born at
8 Howard Place, Edinburgh, Scotland, on 13 November 1850. It has been more than
100 years since his death. Stevenson was a writer who caused conflicting
opinions about his works. On one hand, he was often highly praised for his
expert prose and style by many English-language critics. On the other hand,
others criticised the religious themes in his works, often misunderstanding
Stevenson’s own religious beliefs. In the century since his death, critics
and biographers have disagreed on the legacy of Stevenson’s writing. Two
biographers, KF and CP, wrote a biography about Stevenson with a clear focus.
They chose not to criticise aspects of Stevenson’s personal life. Instead, they
focused on his writing, and gave high praise to his writing style and skill.
The
literary pendulum has swung these days. Different critics have different
opinions towards Robert Louis Stevenson’s works. Though today, Stevenson is one
of the most translated authors in the world, his works have sustained a wide
variety of negative criticism throughout his life. It was like a complete
reversal of polarity: from highly positive to slightly less positive to clearly
negative; after being highly praised as a great writer, he became an example of
an author with corrupt ethics and a lack of morals. Many literary critics passed
his works off as children’s stories or horror stories, and they were thought to have
little social value in an educational setting. Stevenson’s works were often
excluded from literature curricula because of their controversial nature. These
debates remain, and many critics still assert that despite his skill, his
literary works still lack moral value.
One of
the main reasons why Stevenson’s literary works attracted so much criticism was
due to the genre of his writing. Stevenson mainly wrote adventure stories,
which was part of a popular and entertaining writing fad at the time. Many of
us believe adventure stories are exciting and offer engaging characters, action,
and mystery, but ultimately cannot teach moral principles. The plot points are
one-dimensional and rarely offer a deeper moral meaning, instead focusing on
exciting and shocking plot twists and thrilling events. His works were even
criticised by fellow authors. Though Stevenson’s works have deeply influenced
Oscar Wilde, Wilde often joked that Stevenson would have written better works
had he not been born in Scotland. Other authors came to Stevenson’s defence,
including Galsworthy, who claimed that Stevenson was a greater writer than Thomas
Hardy.
Despite
Wilde’s criticism, Stevenson’s Scottish identity was an integral part of his
written works. Although Stevenson’s works were not popular in Scotland when he
was alive, many modern Scottish literary critics claim that Sir Walter Scott
and Robert Louis Stevenson are the most influential writers in the history of
Scotland. While many critics exalt Sir Walter Scott as a literary genius
because of his technical ability, others argue that Stevenson deserves the same
recognition for his natural ability to capture stories and characters in words.
Many of Scott’s works were taken more seriously as literature for their depth
due to their tragic themes, but fans of Stevenson praise his unique style of
story-telling and capture of human nature. Stevenson’s works, unlike other
British authors, captured the unique day to day life of average Scottish
people. Many literary critics point to this as a flaw of his works. According
to the critics, truly important literature should transcend local culture and
stories. However, many critics praise the local taste of his literature. To
this day, Stevenson’s works provide valuable insight to life in Scotland during
the 19th century.
Despite much debate over Stevenson’s writing topics, his writing was not the only source of attention for critics. Stevenson’s personal life often attracted a lot of attention from his fans and critics alike. Some even argue that his personal life eventually outshone his writing. Stevenson had been plagued with health problems his whole life, and often had to live in much warmer climates than the cold, dreary weather of Scotland in order to recover. So he took his family to the South Pacific island of Samoa, which was a controversial decision at the time. However, Stevenson did not regret the decision. The sea air and thrill of adventure complemented the themes of his writing, and for a time restored his health. From there, Stevenson gained a love of travelling, and for nearly three years he wandered the eastern and central Pacific. Much of his work reflected this love of travel and adventure that Stevenson experienced in the Pacific islands. It was as a result of this biographical attention that the feeling grew that interest in Stevenson’s life had taken the place of interest in his works. Whether critics focus on his writing subjects, his religious beliefs, or his eccentric lifestyle of travel and adventure, people from the past and present have different opinions about Stevenson as an author. Today, he remains a controversial yet widely popular figure in Western literature.
IELTS Reading Recent Actual Test 09
READING PASSAGE 1
You
should spend about 20 minutes on Questions 1-13 which
are based on Reading Passage 1 below.
What Do Managers Really Do?
When
students graduate and first enter the workforce, the most common choice is to
find an entry-level position. This can be a job such as an unpaid internship,
an assistant, a secretary, or a junior partner position. Traditionally, we
start with simpler jobs and work our way up. Young professionals start out with
a plan to become senior partners, associates, or even managers of a workplace.
However, these promotions can be few and far between, leaving many young
professionals unfamiliar with management experience. An important step is
understanding the role and responsibilities of a person in a managing position.
Managers are organisational members who are responsible for the work
performance of other organisational members. Managers have formal authority to
use organisational resources and to make decisions. Managers at different
levels of the organisation spend different amounts of time on the four
managerial functions of planning, organising, leading, and controlling.
However,
as many professionals already know, managing styles can be very different
depending on where you work. Some managing styles are strictly hierarchical.
Other managing styles can be more casual and relaxed, where the manager may act
more like a team member rather than a strict boss. Many researchers have
created a more scientific approach in studying these different approaches to
managing. In the 1960s, researcher Henry Mintzberg created a seminal
organisational model using three categories. These categories represent three
major functional approaches, which are designated as interpersonal,
informational and decisional.
Introduced
Category 1: INTERPERSONAL ROLES. Interpersonal roles require managers to direct
and supervise employees and the organisation. The figurehead is typically a top
or middle manager. This manager may communicate future organisational goals or
ethical guidelines to employees at company meetings. They also attend
ribbon-cutting ceremonies, host receptions, presentations and other activities
associated with the figurehead role. A leader acts as an example for other
employees to follow, gives commands and directions to subordinates, makes
decisions, and mobilises employee support. They are also responsible for the
selection and training of employees. Managers must be leaders at all levels of
the organisation; often lower-level managers look to top management for this
leadership example. In the role of liaison, a manager must coordinate the work
of others in different work units, establish alliances between others, and work
to share resources. This role is particularly critical for middle managers, who
must often compete with other managers for important resources, yet must
maintain successful working relationships with them for long time periods.
Introduced
Category 2: INFORMATIONAL ROLES. Informational roles are those in which
managers obtain and transmit information. These roles have changed dramatically
as technology has improved. The monitor evaluates the performance of others and
takes corrective action to improve
that
performance. Monitors also watch for changes in the environment and within the
company that may affect individual and organisational performance. Monitoring
occurs at all levels of management. The role of disseminator requires that
managers inform employees of changes that affect them and the organisation.
They also communicate the company’s vision and purpose.
Introduced
Category 3: DECISIONAL ROLES. Decisional roles require managers to plan
strategy and utilise resources. There are four specific roles that are
decisional. The entrepreneur role requires the manager to assign resources to
develop innovative goods and services, or to expand a business. The disturbance
handler corrects unanticipated problems facing the organisation from the
internal or external environment. The third decisional role, that of resource
allocator, involves determining which work units will get which resources. Top
managers are likely to make large, overall budget decisions, while middle
managers may make more specific allocations. Finally, the negotiator works with
others, such as suppliers, distributors, or labor unions, to reach agreements
regarding products and services.
Although
Mintzberg’s initial research in the 1960s helped categorise manager approaches,
Mintzberg was still concerned about research involving other roles in the
workplace. Mintzberg considered expanding his research to other roles, such as
the role of disseminator, figurehead, liaison and spokesperson. Each role would
have different special characteristics, and a new categorisation system would
have to be made for each role to understand it properly.
While
Mintzberg’s initial research was helpful in starting the conversation, there
has since been criticism of his methods from other researchers. Some criticisms
of the work were that even though there were multiple categories, the role of
manager is still more complex. There are still many manager roles that are not
as traditional and are not captured in Mintzberg’s original three categories.
In addition, Mintzberg’s research was not always effective. The
research, when applied to real-life situations, did not always improve the
management process in real-life practice.
These two criticisms of Mintzberg’s research method raised some questions about whether the research is useful to how we understand “managers” in today’s world. However, even if the criticisms against Mintzberg’s work are true, it does not mean that the original research from the 1960s is completely useless. Those researchers did not say Mintzberg’s research is invalid; rather, his research serves two positive functions for further research.
The
first positive function is that Mintzberg provided a useful functional approach to
analyse management, and he used this approach to give researchers a clear concept
of the role of manager. When researching human behavior, it is
important to be concise about the subject of the research. Mintzberg’s research
has helped other researchers clearly define what a “manager” is, because in
real-life situations, the “manager” is not always the same position title.
Mintzberg’s definitions added clarity and precision to future research on the
topic.
The second positive function is that Mintzberg’s research can be regarded as a good beginning, one that offers new insight to further research in this field. Scientific research is always a gradual process. Just because Mintzberg’s initial research had certain flaws does not mean it is useless to other researchers. Researchers who are interested in studying the workplace in a systematic way have older research to look back on. A researcher doesn’t have to start from the very beginning; older research like Mintzberg’s has shown what methods work well and what methods are not as appropriate for workplace dynamics. As more young professionals enter the job market, this research will continue to shape the way we think about the modern workplace.
READING PASSAGE 2
You should spend about 20 minutes on Questions
14-26 which are based on Reading Passage 2 below.
Keep the Water Away
A.
Last winter’s floods on the rivers of
central Europe were among the worst since the Middle Ages, and as winter storms
return, the spectre of floods is returning too. Just weeks ago, the river Rhone
in south-east France burst its banks, driving 15,000 people from their homes,
and worse could be on the way. Traditionally, river engineers have gone for
Plan A: get rid of the water fast, draining it off the land and down to the sea
in tall-sided rivers re-engineered as high-performance drains. But however big
they dug city drains, however wide and straight they made the rivers, and
however high they built the banks, the floods kept coming back to taunt them,
from the Mississippi to the Danube. And when the floods came, they seemed to
be worse than ever. No wonder engineers are turning to Plan B: sap the water’s
destructive strength by dispersing it into fields, forgotten lakes, flood
plains and aquifers.
B.
Back in the days when rivers took a more
tortuous path to the sea, flood waters lost impetus and volume while meandering
across flood plains and idling through wetlands and inland deltas. But today
the water tends to have an unimpeded journey to the sea. And this means that
when it rains in the uplands, the water comes down all at once. Worse, whenever
we close off more flood plains, the river’s flow farther downstream becomes
more violent and uncontrollable. Dykes are only as good as their weakest
link, and the water will unerringly find it. By trying to turn the complex
hydrology of rivers into the simple mechanics of a water pipe, engineers have
often created danger where they promised safety, and intensified the floods
they meant to end. Take the Rhine, Europe’s most engineered river. For two
centuries, German engineers have erased its backwaters and cut it off from its
flood plain.
C.
Today, the river has lost 7 percent of its
original length and runs up to a third faster. When it rains hard in the Alps,
the peak flows from several tributaries coincide in the main river, where once
they arrived separately. And with four-fifths of the lower Rhine’s flood plain
barricaded off, the waters rise ever higher. The result is more frequent
flooding that does ever-greater damage to the homes, offices and roads that sit
on the flood plain. Much the same has happened in the US on the mighty
Mississippi, which drains the world’s second largest river catchment into the
Gulf of Mexico.
D.
The European Union is trying to improve
rain forecasts and more accurately model how intense rains swell rivers. That
may help cities prepare, but it won’t stop the floods. To do that, say
hydrologists, you need a new approach to engineering not just rivers, but the
whole landscape. The UK’s Environment Agency, which has been granted an extra
£150 million a year to spend in the wake of floods in 2000 that cost the
country £1 billion, puts it like this: “The focus is now on working with the
forces of nature. Towering concrete walls are out, and new wetlands are in.”
To help keep London’s feet dry, the agency is breaking the Thames’s banks
upstream and reflooding 10 square kilometres of ancient flood plain at Otmoor
outside Oxford. Nearer to London it has spent £100 million creating new
wetlands and a relief channel across 16 kilometres of flood plain to protect
the town of Maidenhead, as well as the ancient playing fields of Eton College.
And near the south coast, the agency is digging out channels to reconnect old
meanders on the river Cuckmere in East Sussex that were cut off by flood banks
150 years ago.
E.
The same is taking place on a much grander
scale in Austria, in one of Europe’s largest river restorations to date.
Engineers are regenerating flood plains along 60 kilometres of the river Drava
as it exits the Alps. They are also widening the river bed and channelling it
back into abandoned meanders, oxbow lakes and backwaters overhung with willows.
The engineers calculate that the restored flood plain can now store up to 10
million cubic metres of flood waters and slow storm surges coming out of the Alps
by more than an hour, protecting towns as far downstream as Slovenia and
Croatia.
F.
“Rivers have to be allowed to take more
space. They have to be turned from flood-chutes into flood-foilers,” says
Nienhuis. And the Dutch, for whom preventing floods is a matter of survival,
have gone furthest. A nation built largely on drained marshes and seabed had
the fright of its life in 1993 when the Rhine almost overwhelmed it. The same
happened again in 1995, when a quarter of a million people were evacuated from
the Netherlands. But a new breed of “soft engineers” wants our cities to become
porous, and Berlin is their shining example. Since reunification, the city’s
massive redevelopment has been governed by tough new rules to prevent its
drains becoming overloaded after heavy rains. Harald Kraft, an architect
working in the city, says: “We now see rainwater as a resource to be kept
rather than got rid of at great cost.” A good illustration is the giant
Potsdamer Platz, a huge new commercial redevelopment by Daimler Chrysler in the
heart of the city.
G. Los Angeles has spent billions of dollars digging huge drains and concreting river beds to carry away the water from occasional intense storms. The latest plan is to spend a cool $280 million raising the concrete walls on the Los Angeles river by another 2 metres. Yet many communities still flood regularly. Meanwhile this desert city is shipping in water from hundreds of kilometres away in northern California and from the Colorado river in Arizona to fill its taps and swimming pools, and irrigate its green spaces. It all sounds like bad planning. “In LA we receive half the water we need in rainfall, and we throw it away. Then we spend hundreds of millions to import water,” says Andy Lipkis, an LA environmentalist. He, along with citizen groups like Friends of the Los Angeles River and Unpaved LA, wants to beat the urban flood hazard and fill the taps by holding onto the city’s flood water. And it’s not just a pipe dream. The authorities this year launched a $100 million scheme to road-test the porous city in one flood-hit community in Sun Valley. The plan is to catch the rain that falls on thousands of driveways, parking lots and rooftops in the valley. Trees will soak up water from parking lots. Homes and public buildings will capture roof water to irrigate gardens and parks. And road drains will empty into old gravel pits and other leaky places that should recharge the city’s underground water reserves. Result: less flooding and more water for the city. Plan B says every city should be porous, every river should have room to flood naturally and every coastline should be left to build its own defences. It sounds expensive and utopian, until you realise how much we spend trying to drain cities and protect our watery margins, and how bad we are at it.
READING PASSAGE 3
You should spend about 20 minutes on Questions
27-40 which are based on Reading Passage 3 below.
The Future of the World’s Languages
Of the
world’s 6,500 living languages, around half are expected to die out by the end
of this century, according to UNESCO. Just 11 are spoken by more than half of
the earth’s population, so it is little wonder that those used by only a few
are being left behind as we become a more homogenous, global society. In short,
95 percent of the world’s languages are spoken by only five percent of its
population—a remarkable level of linguistic diversity stored in tiny pockets of
speakers around the world. Mark Turin, a university professor, has launched
the WOLP (World Oral Literature Project) to bring languages back from the brink of
extinction.
He is
trying to encourage indigenous communities to collaborate with anthropologists
around the world to record what he calls “oral literature” through video
cameras, voice recorders and other multimedia tools by awarding grants from a
£30,000 pot that the project has secured this year. The idea is to collate this
literature in a digital archive that can be accessed on demand and will make
the nuts and bolts of lost cultures readily available.
For
many of these communities, the oral tradition is at the heart of their culture.
The stories they tell are creative as well as communicative. Unlike the
languages with celebrated written traditions, such as Sanskrit, Hebrew and
Ancient Greek, few indigenous communities have recorded their own languages or
ever had them recorded until now.
The
project suggested itself when Turin was teaching in Nepal. He wanted to study
for a PhD in endangered languages and, while discussing it with his professor
at Leiden University in the Netherlands, was drawn to a map on his tutor’s
wall. The map was full of pins of a variety of colours which represented all
the world’s languages that were completely undocumented. At random, Turin chose
a “pin” to document. It happened to belong to the Thangmi tribe, an indigenous
community in the hills east of Kathmandu, the capital of Nepal. “Many of the
choices made by anthropologists and linguists who work on these traditional field-work
projects are quite random,” he admits.
Continuing
his work with the Thangmi community in the 1990s, Turin began to record the
language he was hearing, realising that not only was this language and its
culture entirely undocumented, it was known to few outside the tiny community.
He set about trying to record their language and myth of origins. “I wrote
1,000 pages of grammar in English that nobody could use—but I realised that
wasn’t enough. It wasn’t enough for me, it wasn’t enough for them. It simply
wasn’t going to work as something for the community. So then I produced this
trilingual word list in Thangmi, Nepali and English.”
In
short, it was the first ever publication of that language. That small
dictionary is still sold in local schools for a modest 20 rupees, and used as
part of a wider cultural regeneration process to educate children about their
heritage and language. The task is no small undertaking: Nepal itself is a
country of massive ethnic and linguistic diversity, home to 100 languages from
four different language families. What’s more, ever fewer ethnic Thangmi speak
the Thangmi language. Many of the community members have taken to speaking
Nepali, the national language taught in schools and spread through the media,
and community elders are dying without passing on their knowledge.
Despite
Turin’s enthusiasm for his subject, he is baffled by many linguists’ refusal to
engage in the issue he is working on. “Of the 6,500 languages spoken on Earth,
many do not have written traditions and many of these spoken forms are
endangered,” he says. “There are more linguists in universities around the
world than there are spoken languages—but most of them aren’t working on this
issue. To me it’s amazing that in this day and age, we still have an entirely
incomplete image of the world’s linguistic diversity. People do PhDs on the
apostrophe in French, yet we still don’t know how many languages are spoken.”
“When a
language becomes endangered, so too does a cultural world view. We want to
engage with indigenous people to document their myths and folklore, which can
be harder to find funding for if you are based outside Western universities.”
Yet,
despite the struggles facing initiatives such as the World Oral Literature
Project, there are historical examples that point to the possibility that
language restoration is no mere academic pipe dream. The revival of a modern
form of Hebrew in the 19th century is often cited as one of the best proofs
that languages long dead, belonging to small communities, can be resurrected
and embraced by a large number of people. By the 20th century, Hebrew was well
on its way to becoming the main language of the Jewish population of both
Ottoman and British Palestine. It is now spoken by more than seven million
people in Israel.
Yet,
despite the difficulties these communities face in saving their languages, Dr
Turin believes that the fate of the world’s endangered languages is not sealed,
and globalisation is not necessarily the nefarious perpetrator of evil it is
often presented to be. “I call it the globalisation paradox: on the one hand
globalisation and rapid socio-economic change are the things that are eroding
and challenging diversity. But on the other, globalisation is providing us with
new and very exciting tools and facilities to get to places to document those
things that globalisation is eroding. Also, the communities at the coal-face of
change are excited by what globalisation has to offer.”
In the meantime, the race is on to collect and protect as many of the languages as possible, so that the Rai shaman in eastern Nepal and those in the generations that follow him can continue their traditions and have a sense of identity. And it certainly is a race: Turin knows his project’s limits and believes it inevitable that a large number of those languages will disappear. “We have to be wholly realistic. A project like ours is in no position, and was not designed, to keep languages alive. The only people who can help languages survive are the people in those communities themselves. They need to be reminded that it’s good to speak their own language and I think we can help them do that—becoming modern doesn’t mean you have to lose your language.”
IELTS Reading Recent Actual Test 10
READING PASSAGE 1
You should spend about 20 minutes on Questions 1-13 which
are based on Reading Passage 1 below.
Radiocarbon Dating – The Profile of Nancy Athfield
Have
you ever picked up a small stone off the ground and wondered how old it was?
Chances are, that stone has been around many more years than your own lifetime.
Many scientists share this curiosity about the age of inanimate objects like
rocks, fossils and precious stones. Knowing how old an object is can provide
valuable information about our prehistoric past. In most societies, human
beings have kept track of history through writing. However, scientists are
still curious about the world before writing, or even the world before humans.
Studying the age of objects is our best way to piece together histories of our
pre-historic past. One such method of finding the age of an object is called
radiocarbon dating. This method can find the age of any object based on the
kind of particles and atoms that are found inside of the object. Depending on
what elements the object is composed of, radiocarbon can be a reliable way to
find an object’s age. One famous specialist in this method is the researcher
Nancy Athfield. Athfield studied the ancient remains found in the country of
Cambodia. Many prehistoric remains were discovered by the local people of
Cambodia. These objects were thought to belong to some of the original groups
of humans that first came to the country of Cambodia. The remains had never
been scientifically studied, so Nancy was greatly intrigued by the opportunity
to use modern methods to discover the true age of these ancient objects.
Athfield
had this unique opportunity because her team, comprised of scientists and
filmmakers, were in Cambodia working on a documentary. The team was trying to
discover evidence to prove a controversial claim in history: that Cambodia was
the resting place for the famous royal family of Angkor. At that time, written
records and historic accounts conflicted on the true resting place. Many people
across the world disagreed over where the final resting place was. For the
first time, Athfield and her team had a chance to use radiocarbon dating to
find new evidence. They had a chance to solve the historic mystery that many
had been arguing over for years.
Athfield
and her team conducted radiocarbon dating of many of the ancient objects found
in the historic site of Angkor Wat. Nancy found the history of Angkor went back
to as early as 1620. According to historic records, the remains of the Angkor
royal family were much younger than that, so this evidence cast a lot of doubt
as to the status of the ancient remains. The research ultimately raised more
questions. If the remains were not of the royal family, then whose remains were
being kept in the ancient site? Athfield’s team left Cambodia with more questions
unanswered. Since Athfield’s team studied the remains, new remains have been
unearthed at the ancient site of Angkor Wat, so it is possible that these new
remains could be the true remains of the royal family. Nancy wished to come
back to continue her research one day.
In her
early years, the career of Athfield was very unconventional. She didn’t start
her career as a scientist. At the beginning, she would take any kind of job to
pay her bills. Most of them were low-paying jobs or brief community service
opportunities. She worked often but didn’t know what path she would ultimately
take. But eventually, her friend suggested that Athfield invest in getting a
degree. The friend recommended that Athfield attend a nearby university. Though
doubtful of her own qualifications, she applied and was eventually accepted by
the school. It was there that she met Willard Libby, the inventor of
radiocarbon dating. She took his class and soon had the opportunity to complete
hands-on research. She soon realised that science was her passion. After
graduation, she quickly found a job in a research institution.
After
college, Athfield’s career in science blossomed. She eventually married, and
her husband landed a job at the prestigious organisation GNN. Athfield joined
her husband in the same organisation, and she became a lab manager in the
institution. She earned her PhD in scientific research, and completed her
studies on when a kind of rat first appeared in New Zealand. There, she
created original research and found many flaws in the methods being used in New
Zealand laboratories. Her research showed that the subject’s diet led to the
fault in the earlier research. She was seen as an expert by her peers in New
Zealand, and her opinion and expertise were widely respected. She had come a
long way from her old days of working odd jobs. It seemed that Athfield’s
career was finally taking off.
But
Athfield’s interest in scientific laboratories wasn’t her only interest. She
didn’t settle down in New Zealand. Instead, she expanded her areas of
expertise. Athfield eventually joined the field of Anthropology, the study of
human societies, and became a well-qualified archaeologist. It was during her
blossoming career as an archaeologist that Athfield became involved with the
famous Cambodia project. Even as the filmmakers ran out of funding and left
Cambodia, Athfield stayed on to continue her research.
In 2003, the film was finished with inconclusive results, but Nancy continued her research on the ancient ruins of Angkor Wat. This research was not always easy. Her research was often delayed by lack of funding and government paperwork. Despite her struggles, she committed to finishing her research. Finally, she made a breakthrough. Using radiocarbon dating, Athfield completed a database for the materials found in Cambodia. As a newcomer to Cambodia, she lacked a complete knowledge of Cambodian geology, which made this feat even more difficult. Through steady determination and ingenuity, Athfield finally completed the database. Though many did not believe she could finish, her research now remains an influential and tremendous contribution to geological sciences in Cambodia. Radiocarbon dating will continue to be a valuable research method in the future. Athfield will be remembered as one of the first to bring this scientific method to the study of the ancient ruins of Angkor Wat.
READING PASSAGE 2
You should spend about 20 minutes on Questions
14-26 which are based on Reading Passage 2 below.
Are Artists Liars?
A.
Shortly before his death, Marlon Brando
was working on a series of instructional videos about acting, to be called
“Lying for a Living”. On the surviving footage, Brando can be seen dispensing
gnomic advice on his craft to a group of enthusiastic, if somewhat bemused, Hollywood
stars, including Leonardo DiCaprio and Sean Penn. Brando also recruited random
people from the Los Angeles street and persuaded them to improvise (the footage
is said to include a memorable scene featuring two dwarves and a giant Samoan).
“If you can lie, you can act,” Brando told Jod Kaftan, a writer for Rolling
Stone and one of the few people to have viewed the footage. “Are you good at
lying?” asked Kaftan. “Jesus,” said Brando, “I’m fabulous at it.”
B.
Brando was not the first person to note
that the line between an artist and a liar is a fine one. If art is a kind of
lying, then lying is a form of art, albeit of a lower order, as Oscar Wilde and
Mark Twain have observed. Indeed, lying and artistic storytelling spring from a
common neurological root, one that is exposed in the cases of psychiatric
patients who suffer from a particular kind of impairment. Both liars and
artists refuse to accept the tyranny of reality. Both carefully craft stories
that are worthy of belief – a skill requiring intellectual sophistication,
emotional sensitivity and physical self-control (liars are writers and
performers of their own work). Such parallels are hardly coincidental, as I
discovered while researching my book on lying.
C.
A case study published in 1985 by Antonio
Damasio, a neurologist, tells the story of a middle-aged woman with brain
damage caused by a series of strokes. She retained cognitive abilities,
including coherent speech, but what she actually said was rather unpredictable.
Checking her knowledge of contemporary events, Damasio asked her about the
Falklands War. In the language of psychiatry, this woman was “confabulating”.
Chronic confabulation is a rare type of memory problem that affects a small
proportion of brain damaged people. In the literature it is defined as “the
production of fabricated, distorted or misinterpreted memories about oneself or
the world, without the conscious intention to deceive”. Whereas amnesiacs make
errors of omission – there are gaps in their recollections they find impossible
to fill – confabulators make errors of commission: they make things up. Rather
than forgetting, they are inventing. Confabulating patients are nearly always
oblivious to their own condition, and will earnestly give absurdly implausible
explanations of why they’re in hospital, or talking to a doctor. One patient,
asked about his surgical scar, explained that during the Second World War he
surprised a teenage girl who shot him three times in the head, killing him,
only for surgery to bring him back to life. The same patient, when asked about
his family, described how at various times they had died in his arms, or had
been killed before his eyes. Others tell yet more fantastical tales, about
trips to the moon, fighting alongside Alexander in India or seeing Jesus on the
Cross. Confabulators aren’t out to deceive. They engage in what Morris
Moscovitch, a neuropsychologist, calls “honest lying”. Uncertain and obscurely
distressed by their uncertainty, they are seized by a “compulsion to narrate”:
a deep-seated need to shape, order and explain what they do not understand.
Chronic confabulators are often highly inventive at the verbal level, jamming
together words in nonsensical but suggestive ways: one patient, when asked what
happened to Queen Marie Antoinette of France, answered that she had been
“suicided” by her family. In a sense, these patients are like novelists, as
described by Henry James: people on whom “nothing is wasted”. Unlike writers,
however, they have little or no control over their own material.
D.
The wider significance of this condition
is what it tells us about ourselves. Evidently, there is a gushing river of
verbal creativity in the normal human mind, from which both artistic invention
and lying are drawn. We are born storytellers, spinning narrative out of our
experience and imagination, straining against the leash that keeps us tethered
to reality. This is a wonderful thing; it is what gives us our ability to
conceive of alternative futures and different worlds. And it helps us to
understand our own lives through the entertaining stories of others. But it can
lead us into trouble, particularly when we try to persuade others that our
inventions are real. Most of the time, as our stories bubble up to
consciousness, we exercise our cerebral censors, controlling which stories we
tell, and to whom. Yet people lie for all sorts of reasons, including the fact
that confabulating can be dangerously fun.
E. During a now-famous libel case in 1996, Jonathan Aitken, a former cabinet minister, recounted a tale to illustrate the horrors he endured after a national newspaper tainted his name. The case, which stretched on for more than two years, involved a series of claims made by the Guardian about Aitken’s relationships with Saudi arms dealers, including meetings he allegedly held with them on a trip to Paris while he was a government minister. What amazed many in hindsight was the sheer superfluity of the lies Aitken told during his testimony. Aitken’s case collapsed in June 1997, when the defence finally found indisputable evidence about his Paris trip. Until then, Aitken’s charm, fluency and flair for theatrical displays of sincerity looked as if they might bring him victory. The records revealed that not only was Aitken’s daughter not with him that day (when he was indeed doorstepped), but also that the minister had simply got into his car and driven off, with no vehicle in pursuit.
F. Of course, unlike Aitken, actors, playwrights and novelists are not literally attempting to deceive us, because the rules are laid out in advance: come to the theatre, or open this book, and we’ll lie to you. Perhaps this is why we feel it necessary to invent art in the first place: as a safe space into which our lies can be corralled, and channeled into something socially useful. Given the universal compulsion to tell stories, art is the best way to refine and enjoy the particularly outlandish or insightful ones. But that is not the whole story. The key way in which artistic “lies” differ from normal lies, and from the “honest lying” of chronic confabulators, is that they have a meaning and resonance beyond their creator. The liar lies on behalf of himself; the artist tells lies on behalf of everyone. If writers have a compulsion to narrate, they compel themselves to find insights about the human condition. Mario Vargas Llosa has written that novels “express a curious truth that can only be expressed in a furtive and veiled fashion, masquerading as what it is not.” Art is a lie whose secret ingredient is truth.
READING PASSAGE 3
You should spend about 20 minutes on Questions
27-40 which are based on Reading Passage 3 below.
What is Meaning
—Why do we respond to words and symbols in the
ways we do?
The end
product of education, yours and mine and everybody’s, is the total pattern of
reactions and possible reactions we have inside ourselves. If you did not have
within you at this moment the pattern of reactions that we call “the ability to
read,” you would see here only meaningless black marks on paper. Because of the
trained patterns of response, you are (or are not) stirred to patriotism by
martial music, your feelings of reverence are aroused by symbols of your
religion, you listen more respectfully to the health advice of someone who has
“MD” after his name than to that of someone who hasn’t. What I call here a
“pattern of reactions”, then, is the sum total of the ways we act in response
to events, to words, and to symbols.
Our
reaction patterns, or our semantic habits, are the internal and most important
residue of whatever years of education or miseducation we may have received
from our parents’ conduct toward us in childhood as well as their teachings,
from the formal education we may have had, from all the lectures we have
listened to, from the radio programs and the movies and television shows we
have experienced, from all the books and newspapers and comic strips we have
read, from the conversations we have had with friends and associates, and from
all our experiences. If, as the result of all these influences that make us
what we are, our semantic habits are reasonably similar to those of most people
around us, we are regarded as “normal,” or perhaps “dull.” If our semantic
habits are noticeably different from those of others, we are regarded as
“individualistic” or “original,” or, if the differences are disapproved of or
viewed with alarm, as “crazy.”
Semantics
is sometimes defined in dictionaries as “the science of the meaning of words”—
which would not be a bad definition if people didn’t assume that the search for
the meanings of words begins and ends with looking them up in a dictionary. If
one stops to think for a moment, it is clear that to define a word, as a
dictionary does, is simply to explain the word with more words. To be thorough
about defining, we should next have to define the words used in the definition,
then define the words used in defining the words used in the definition and so
on. Defining words with more words, in short, gets us at once into what
mathematicians call an “infinite regress”. Alternatively, it can get us into
the kind of run-around we sometimes encounter when we look up “impertinence”
and find it defined as “impudence,” so we look up “impudence” and find it defined
as “impertinence.” Yet—and here we come to another common reaction
pattern—people often act as if words can be explained fully with more words. To
a person who asked for a definition of jazz, Louis Armstrong is said to have
replied, “Man, when you got to ask what it is, you’ll never get to know,”
proving himself to be an intuitive semanticist as well as a great trumpet
player.
Semantics,
then, does not deal with the “meaning of words” as that expression is commonly
understood. P. W. Bridgman, the Nobel Prize winner and physicist, once wrote,
“The true meaning of a term is to be found by observing what a man does with
it, not by what he says about it.” He made an enormous contribution to science
by showing that the meaning of a scientific term lies in the operations, the
things done, that establish its validity, rather than in verbal definitions.
Here is
a simple, everyday kind of example of “operational” definition. If you say,
“This table measures six feet in length,” you could prove it by taking a foot rule,
performing the operation of laying it end to end while counting,
“One…two…three…four…” But if you say—and revolutionists have started uprisings
with just this statement “Man is born free, but everywhere he is in
chains!”—what operations could you perform to demonstrate its accuracy or
inaccuracy?
But let us carry this suggestion of “operationalism” outside the physical sciences where Bridgman applied it, and observe what “operations” people perform as the result of both the language they use and the language other people use in communicating to them. Here is a personnel manager studying an application blank. He comes to the words “Education: Harvard University,” and drops the application blank in the wastebasket (that’s the “operation”) because, as he would say if you asked him, “I don’t like Harvard men.” This is an instance of “meaning” at work—but it is not a meaning that can be found in dictionaries.
If I
seem to be taking a long time to explain what semantics is about, it is because
I am trying, in the course of explanation, to introduce the reader to a certain
way of looking at human behavior. I say human responses because, so far as we
know, human beings are the only creatures that have, over and above that
biological equipment which we have in common with other creatures, the
additional capacity for manufacturing symbols and systems of symbols. When we
react to a flag, we are not reacting simply to a piece of cloth, but to the
meaning with which it has been symbolically endowed. When we react to a word,
we are not reacting to a set of sounds, but to the meaning with which that set
of sounds has been symbolically endowed.
A basic idea in general semantics, therefore, is that the meaning of words (or other symbols) is not in the words, but in our own semantic reactions. If I were to tell a shockingly obscene story in Arabic or Hindustani or Swahili before an audience that understood only English, no one would blush or be angry; the story would be neither shocking nor obscene – indeed, it would not even be a story. Likewise, the value of a dollar bill is not in the bill, but in our social agreement to accept it as a symbol of value. If that agreement were to break down through the collapse of our government, the dollar bill would become only a scrap of paper. We do not understand a dollar bill by staring at it long and hard. We understand it by observing how people act with respect to it. We understand it by understanding the social mechanisms and the loyalties that keep it meaningful. Semantics is therefore a social study, basic to all other social studies.
IELTS Reading Recent Actual Test 11
READING PASSAGE 1
You should spend about 20 minutes on Questions 1-13 which
are based on Reading Passage 1 below.
The “Extinct” Grass in Britain
A.
The British grass interrupted brome was
said to be extinct, just like the Dodo. Called interrupted brome because of its
gappy seed-head, this unprepossessing grass was found nowhere else in the
world. Gardening experts from the Victorian era were the first to record it. In
the early 20th century, it grew far and wide across southern England. But it
quickly vanished and by 1972 was nowhere to be found. Even the seeds stored at
the Cambridge University Botanic Garden as an insurance policy were dead, having
been mistakenly kept at room temperature. Fans of the grass were devastated.
B.
However, reports of its decline were not
entirely correct. Interrupted brome has enjoyed a revival, one that’s not due
to science. Because of the work of one gardening enthusiast, interrupted brome
is thriving as a pot plant. The relaunching into the wild of Britain’s almost
extinct plant has excited conservationists everywhere.
C.
Originally, Philip Smith didn’t know that
he had the very unusual grass at his own home. When he heard about the grass
becoming extinct, he wanted to do something surprising. He attended a meeting
of the British Botanical Society in Manchester in 1979, and seized his
opportunity. He said that it was so disappointing to hear about the demise of
the interrupted brome. “What a pity we didn’t research it further!” he added.
Then, all of a sudden, he displayed his pots with the so-called “extinct grass”
for all to see.
D.
Smith had kept the seeds from the last
stronghold of the grass, Pampisford, in 1963. It was then that the grass started
to disappear from the wild. Smith cultivated the grass, year after year.
Ultimately, it was his curiosity in the plant that saved it, not any scientific
or technological project.
E.
For now, the brome’s future is guaranteed.
The seeds from Smith’s plants have been securely stored in the cutting-edge
facilities of the Millennium Seed Bank at Wakehurst Place in Sussex. And living
plants thrive at the botanic gardens at Kew, Edinburgh and Cambridge. This
year, seeds are also saved at sites all across the country and the grass now
flourishes at several public gardens too.
F.
The grass will now be reintroduced to the
British countryside. As a part of the Species Recovery Project, the
organisation English Nature will re-introduce interrupted brome into the
agricultural landscape, provided willing farmers are found. Alas, the grass is
neither beautiful nor practical. It is undoubtedly a weed, a weed that nobody
cares for these days. The brome was probably never widespread enough to annoy
farmers and today, no one would appreciate its productivity or nutritious
qualities. As a grass, it leaves a lot to be desired by agriculturalists.
G.
Smith’s research has attempted to answer
the question of where the grass came from. His research points to mutations
from other weedy grasses as the most likely source. So close is the
relationship that interrupted brome was originally deemed to be a mere variety
of soft brome by the great Victorian taxonomist Professor Hackel. A botanist
from the 19th century, Druce. had taken notes on the grass and convinced his
peers that the grass deserved its own status as a species. Despite Druce
growing up in poverty and his self-taught profession, he became the leading
botanist of his time.
H.
Where the grass came from may be clear,
but the timing of its birth may be tougher to find out. A clue lies in its
penchant for growing as a weed in fields shared with a fodder crop, in
particular nitrogen-fixing legumes such as sainfoin, lucerne or clover.
According to agricultural historian Joan Thirsk, the humble sainfoin and its
company were first noticed in Britain in the early 17th century. Seeds brought
in from the Continent were sown in pastures to feed horses and other livestock.
However, back then, only a few enthusiastic gentlemen were willing to use the
new crops for their prized horses.
I.
Before too long, though, the need to
feed the parliamentary armies in Scotland, England and Ireland was more pressing
than ever. Farmers were forced to produce more bread, cheese and beer. And by
1650 the legumes were increasingly introduced into arable rotations, to serve
as green manure to boost grain yields. A bestseller of its day, Nathaniel
Fiennes’s Sainfoin Improved, published in 1671, helped to spread the word. With
the advent of sainfoin, clover and lucerne, Britain’s very own rogue grass had
suddenly arrived.
J.
Although the credit for the discovery of
interrupted brome goes to a Miss A. M. Barnard, who collected the first
specimens at Odsey, Bedfordshire, in 1849, the grass had probably lurked undetected
in the English countryside for at least a hundred years. Smith thinks the
plant, the world’s version of the Dodo, probably evolved in the late 17th or
early 18th century, once sainfoin became established. Due mainly to the
development of the motor car and subsequent decline of fodder crops for horses,
the brome declined rapidly over the 20th century. Today, sainfoin has almost
disappeared from the countryside, though occasionally its colourful flowers are
spotted in lowland nature reserves. More recently, artificial fertilizers have
made legume rotations unnecessary.
K.
The close relationship with out-of-fashion
crops spells trouble for those seeking to re-establish interrupted brome in
today’s countryside. Much like the once common arable weeds, such as the
corncockle, its seeds cannot survive long in the soil. Each spring, the brome
relied on farmers to resow its seeds; in the days before weed killers and
advanced seed sieves, an ample supply would have contaminated supplies of crop
seed. However, fragile seeds are not the brome’s only problem: this species is
also unwilling to release its seeds as they ripen. According to Smith, the
grass will struggle to survive even in optimal conditions. It would be very
difficult to thrive amongst its more resilient competitors found in today’s
improved agricultural landscape.
L. Nonetheless, interrupted brome’s reluctance to thrive independently may have some benefits. Any farmer willing to foster this unique contribution to the world’s flora can rest assured that the grass will never become an invasive pest. Restoring interrupted brome to its rightful home could bring other benefits too, particularly if this strange species is granted recognition as a national treasure. Thanks to British farmers, interrupted brome was given the chance to evolve in the first place. Conservationists would like to see the grass grow once again in its natural habitat and perhaps, one day, see the grass become a badge of honour for a new generation of environmentally conscious farmers.
READING PASSAGE 2
You should spend about 20 minutes on Questions
14-26 which are based on Reading Passage 2 below.
Implication of False Belief Experiments
A
A
considerable amount of research since the mid 1980s has been concerned with
what has been termed children’s theory of mind. This involves children’s
ability to understand that people can have different beliefs and
representations of the world – a capacity that is shown by four years of age.
Furthermore, this ability appears to be absent in children with autism. The
ability to work out that another person is thinking is clearly an important
aspect of both cognitive and social development. Furthermore, one important
explanation for autism is that children suffering from this condition do not
have a theory of mind (TOM). Consequently, the development of children’s TOM has
attracted considerable attention.
B
Wimmer
and Perner devised a ‘false belief task’ to address this question. They used
some toys to act out the following story. Maxi left some chocolate in a blue
cupboard before he went out. When he was away his mother moved the chocolate to
a green cupboard. Children were asked to predict where Maxi will look for his
chocolate when he returns. Most children under four years gave the incorrect
answer, that Maxi will look in the green cupboard. Those over four years tended
to give the correct answer, that Maxi will look in the blue cupboard. The
incorrect answers indicated that the younger children did not understand that
Maxi’s beliefs and representations no longer matched the actual state of the
world, and they failed to appreciate that Maxi will act on the basis of his
beliefs rather than the way that the world is actually organised.
C
A
simpler version of the Maxi task was devised by Baron-Cohen to take account of
criticisms that younger children may have been affected by the complexity and
too much information of the story in the task described above. For example, the
child is shown two dolls, Sally and Anne, who have a basket and a box,
respectively. Sally also has a marble, which she places in her basket, and then
leaves to take a walk. While she is out of the room, Anne takes the marble from
the basket, eventually putting it in the box. Sally returns, and the child is
then asked where Sally will look for the marble. The child passes the task if
she answers that Sally will look in the basket, where she put the marble; the
child fails the task if she answers that Sally will look in the box, where the
child knows the marble is hidden, even though Sally
cannot know, since she did not see it hidden there. In order to pass the task,
the child must be able to understand that another’s mental representation of
the situation is different from their own, and the child must be able to
predict behavior based on that understanding. The results of research using
false-belief tasks have been fairly consistent: most normally-developing
children are unable to pass the tasks until around age four.
D
Leslie
argues that, before 18 months, children treat the world in a literal way and
rarely demonstrate pretence. He also argues that it is necessary for the
cognitive system to distinguish between what is pretend and what is real. If
children were not able to do this, they would not be able to distinguish
between imagination and reality. Leslie suggested that this pretend play
becomes possible because of the presence of a de-coupler that copies primary
representations to secondary representations. For example, children, when
pretending a banana is a telephone, would make a secondary representation of a
banana. They would manipulate this representation and they would use their
stored knowledge of ‘telephone’ to build on this pretence.
E
There is also evidence that social processes play a part in the development of TOM. Meins and her colleagues have found that what they term mind-mindedness in maternal speech to six-month-old infants is related to both security of attachment and to TOM abilities. Mind-mindedness involves speech that discusses infants’ feelings and explains their behaviour in terms of mental states (e.g. “you’re feeling hungry”).
F
Lewis
investigated older children living in extended families in Crete and Cyprus.
They found that children who socially interact with more adults, who have more
friends, and who have more older siblings tend to pass TOM tasks at a slightly earlier
age than other children. Furthermore, because young children are more likely to
talk about their thoughts and feelings with peers than with their mothers, peer
interaction may provide a special impetus to the development of a TOM. A
similar point has been made by Dunn, who argues that peer interaction is more
likely to contain pretend play and that it is likely to be more challenging
because other children, unlike adults, do not make large adaptations to the
communicative needs of other children.
G
In
addition, there has been concern that some aspects of the TOM approach
underestimate children’s understanding of other people. After all, infants will point to
objects apparently in an effort to change a person’s direction of gaze and
interest; they can interact quite effectively with other people; they will
express their ideas in opposition to the wishes of others; and they will show
empathy for the feeling of others. Schatz studied the spontaneous speech of
three-year-olds and found that these children used mental terms, and used them in
circumstances where there was a contrast between, for example, not being sure
where an object was located and finding it, or between pretending and reality.
Thus the social abilities of children indicate that they are aware of the
difference between mental states and external reality at ages younger than
four.
H
A
different explanation has been put forward by Harris. He proposed that children
use ‘simulation’. This involves putting yourself in the other person’s
position, and then trying to predict what the other person would do. Thus
success on false belief tasks can be explained by children trying to imagine
what they would do if they were a character in the stories, rather than
children being able to appreciate the beliefs of other people. Such thinking
about situations that do not exist involves what is termed counterfactual
reasoning.
READING PASSAGE 3
You should spend about 20 minutes on Questions
27-40 which are based on Reading Passage 3
What Do Babies Know?
As
Daniel Haworth is settled into a high chair and wheeled behind a black screen,
a sudden look of worry furrows his 9-month-old brow. His dark blue eyes dart
left and right in search of the familiar reassurance of his mother’s face. She
calls his name and makes soothing noises, but Daniel senses something unusual
is happening. He sucks his fingers for comfort, but, finding no solace, his
mouth crumples, his body stiffens, and he lets rip an almighty shriek of
distress. This is the usual expression when babies are left alone or abandoned.
Mom picks him up, reassures him, and two minutes later, a chortling and alert
Daniel returns to the darkened booth behind the screen and submits himself to
baby lab, a unit set up in 2005 at the University of Manchester in northwest
England to investigate how babies think.
Watching
infants piece life together, seeing their senses, emotions and motor skills
take shape, is a source of mystery and endless fascination—at least to parents
and developmental psychologists. We can decode their signals of distress or
read a million messages into their first smile. But how much do we really know
about what’s going on behind those wide, innocent eyes? How much of their
understanding of and response to the world comes preloaded at birth? How much
is built from scratch by experience? Such are the questions being explored at
baby lab. Though the facility is just 18 months old and has tested only 100
infants, it’s already challenging current thinking on what babies know and how
they come to know it.
Daniel
is now engrossed in watching video clips of a red toy train on a circular
track. The train disappears into a tunnel and emerges on the other side. A
hidden device above the screen is tracking Daniel’s eyes as they follow the
train and measuring the diameter of his pupils 50 times a second. As the child
gets bored—or “habituated”, as psychologists call the process— his attention
level steadily drops. But it picks up a little whenever some novelty is
introduced. The train might be green, or it might be blue. And sometimes an
impossible thing happens— the train goes into the tunnel one color and comes
out another.
Variations
of experiments like this one, examining infant attention, have been a standard
tool of developmental psychology ever since the Swiss pioneer of the field,
Jean Piaget, started experimenting on his children in the 1920s. Piaget’s work
led him to conclude that infants younger than 9 months have no innate knowledge
of how the world works or any sense of “object permanence” (that people and
things still exist even when they’re not seen). Instead, babies must gradually
construct this knowledge from experience. Piaget’s “constructivist” theories
were massively influential on postwar educators and psychologists, but over the
past 20 years or so they have been largely set aside by a new generation of
“nativist” psychologists and cognitive scientists whose more sophisticated
experiments led them to theorise that infants arrive already equipped with some
knowledge of the physical world and even rudimentary programming for math and
language. Baby lab director Sylvain Sirois has been putting these smart-baby theories
through a rigorous set of tests. His conclusions so far tend to be more
Piagetian: “Babies,” he says, “know nothing.”
What
Sirois and his postgraduate assistant Iain Jackson are challenging is the
interpretation of a variety of classic experiments begun in the mid-1980s in
which babies were shown physical events that appeared to violate such basic
concepts as gravity, solidity and contiguity. In one such experiment, by
University of Illinois psychologist Renee Baillargeon, a hinged wooden panel
appeared to pass right through a box. Baillargeon and M.I.T.’s Elizabeth Spelke
found that babies as young as 3 1/2 months would reliably look longer at the
impossible event than at the normal one. Their conclusion: babies have enough
built-in knowledge to recognise that something is wrong.
Sirois
does not take issue with the way these experiments were conducted. “The methods
are correct and replicable,” he says, “it’s the interpretation that’s the
problem.” In a critical review to be published in the forthcoming issue of the
European Journal of Developmental Psychology, he and Jackson pour cold water
over recent experiments that claim to have observed innate or precocious social
cognition skills in infants. His own experiments indicate that a baby’s
fascination with physically impossible events merely reflects a response to
stimuli that are novel. Data from the eye tracker and the measurement of the
pupils (which widen in response to arousal or interest) show that impossible
events involving familiar objects are no more interesting than possible events
involving novel objects. In other words, once Daniel has seen the red train
come out of the tunnel green a few times, he gets as bored as when it stays the
same color. The mistake of previous research, says Sirois, has been to leap to
the conclusion that infants can understand the concept of impossibility from
the mere fact that they are able to perceive some novelty in it. “The real
explanation is boring,” he says.
So how do babies bridge the gap between knowing squat and drawing triangles—a task Daniel’s sister Lois, 2 1/2, is happily tackling as she waits for her brother? “Babies have to learn everything, but as Piaget was saying, they start with a few primitive reflexes that get things going,” says Sirois. For example, hardwired in the brain is an instinct that draws a baby’s eyes to a human face. From brain imaging studies we also know that the brain has some sort of visual buffer that continues to represent objects after they have been removed—a lingering perception rather than conceptual understanding. So when babies encounter novel or unexpected events, Sirois explains, “there’s a mismatch between the buffer and the information they’re getting at that moment. And what you do when you’ve got a mismatch is you try to clear the buffer. And that takes attention.” So learning, says Sirois, is essentially the laborious business of resolving mismatches. “The thing is, you can do a lot of it with this wet sticky thing called a brain. It’s a fantastic, statistical-learning machine”. Daniel, exams ended, picks up a plastic tiger and, chewing thoughtfully upon its head, smiles as if to agree.
IELTS Reading Recent Actual Test 12
READING PASSAGE 1
You should spend about 20 minutes on Questions 1-13 which
are based on Reading Passage 1
A.
Americans today choose among more options
in more parts of life than has ever been possible before. To an extent, the
opportunity to choose enhances our lives. It is only logical to think that if
some choices are good, more is better; people who care about having infinite
options will benefit from them, and those who do not can always just ignore the
273 versions of cereal they have never tried. Yet recent research strongly
suggests that, psychologically, this assumption is wrong, with 5% fewer people
reporting that they are happy. Although some choices are undoubtedly
better than none, more is not always better than less.
B.
Recent research offers insight into why
many people end up unhappy rather than pleased when their options expand. We began
by making a distinction between “maximizers” (those who always aim to make the
best possible choice) and “satisficers” (those who aim for “good enough,”
whether or not better selections might be out there).
C.
In particular, we composed a set of
statements—the Maximization Scale—to diagnose people’s propensity to maximize.
Then we had several thousand people rate themselves from 1 to 7 (from
“completely disagree” to “completely agree”) on such statements as “I never
settle for second best.” We also evaluated their sense of satisfaction with
their decisions. We did not define a sharp cutoff to separate maximizers from
satisficers, but in general, we think of individuals whose average scores are
higher than 4 (the scale’s midpoint) as maximizers and those whose scores are
lower than the midpoint as satisficers. People who score highest on the
test—the greatest maximizers—engage in more product comparisons than the lowest
scorers, both before and after they make purchasing decisions, and they take
longer to decide what to buy. When satisficers find an item that meets their
standards, they stop looking. But maximizers exert enormous effort reading
labels, checking out consumer magazines and trying new products. They also
spend more time comparing their purchasing decisions with those of others.
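The scoring rule described above (average the 1-to-7 ratings and compare the result with the scale’s midpoint of 4) can be sketched in a few lines of code. This is only an illustration of the arithmetic; the function name and the sample ratings are invented here and are not part of the published Maximization Scale.

```python
def classify_respondent(ratings, midpoint=4.0):
    """Classify a respondent on the Maximization Scale.

    ratings: agreement scores from 1 ("completely disagree")
    to 7 ("completely agree"), one per statement.
    An average above the scale's midpoint marks a maximizer;
    an average below it marks a satisficer. The researchers
    defined no sharp cutoff, so an average falling exactly on
    the midpoint is reported here as borderline.
    """
    average = sum(ratings) / len(ratings)
    if average > midpoint:
        return "maximizer"
    if average < midpoint:
        return "satisficer"
    return "borderline"

# A respondent who agrees strongly with statements such as
# "I never settle for second best" averages well above 4:
print(classify_respondent([6, 7, 5, 6, 7]))  # maximizer
print(classify_respondent([2, 3, 1, 4, 2]))  # satisficer
```

Note that the midpoint threshold is the only judgment built into the rule; everything else is a simple mean of the self-ratings.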
D.
We found that the greatest maximizers are
the least happy with the fruits of their efforts. When they compare themselves
with others, they get little pleasure from finding out that they did better and
substantial dissatisfaction from finding out that they did worse. They are more
prone to experiencing regret after a purchase, and if their acquisition
disappoints them, their sense of well-being takes longer to recover. They also
tend to brood or ruminate more than satisficers do.
E.
Does it follow that maximizers are less
happy in general than satisficers? We tested this by having people fill out a
variety of questionnaires known to be reliable indicators of wellbeing. As
might be expected, individuals with high maximization scores experienced less
satisfaction with life and were less happy, less optimistic and more depressed
than people with low maximization scores. Indeed, those with extreme
maximization ratings had depression scores that placed them in the borderline
clinical range.
F.
Several factors explain why more choice is
not always better than less, especially for maximizers. High among these are
“opportunity costs.” The quality of any given option cannot be assessed in
isolation from its alternatives. One of the “costs” of making a selection is
losing the opportunities that a different option would have afforded. Thus, an
opportunity cost of vacationing on the beach in Cape Cod might be missing the
fabulous restaurants in the Napa Valley. Early decision-making research by
Daniel Kahneman and Amos Tversky showed that people respond much more strongly
to losses than gains. If we assume that opportunity costs reduce the overall
desirability of the most preferred choice, then the more alternatives there
are, the deeper our sense of loss will be and the less satisfaction we will
derive from our ultimate decision.
G.
The problem of opportunity costs will be
less severe for a satisficer. The latter’s “good enough” philosophy can survive
thoughts about opportunity costs. In addition, the “good enough” standard leads
to much less searching and inspection of alternatives than the maximizer’s
“best” standard. With fewer choices under consideration, a person will have
fewer opportunity costs to subtract.
H.
Just as people feel sorrow about the
opportunities they have forgone, they may also suffer regret about the option
they settled on. My colleagues and I devised a scale to measure proneness to
feeling regret, and we found that people with high sensitivity to regret are
less happy, less satisfied with life, less optimistic and more depressed than
those with low sensitivity. Not surprisingly, we also found that people with
high regret sensitivity tend to be maximizers. Indeed, we think that worry over
future regret is a major reason that individuals become maximizers. The only
way to be sure you will not regret a decision is by making the best possible
one. Unfortunately, the more options you have and the more opportunity costs
you incur, the more likely you are to experience regret.
I. In a classic demonstration of the power of sunk costs, people were offered season subscriptions to a local theatre company. Some were offered the tickets at full price and others at a discount. Then the researchers simply kept track of how often the ticket purchasers actually attended the plays over the course of the season. Full-price payers were more likely to show up at performances than discount payers. The reason for this, the investigators argued, was that the full-price payers would experience more regret if they did not use the tickets because not using the more costly tickets would constitute a bigger loss. To increase our sense of happiness, we can decide to restrict our options when the decision is not crucial. For example, make a rule to visit no more than two stores when shopping for clothing.
READING PASSAGE 2
You should spend about 20 minutes on Questions
14-26 which are based on Reading Passage 2
Eco-Resort Management Practices
Ecotourism
is often regarded as a form of nature-based tourism and has become an important
alternative source of tourism. In addition to providing the traditional
resort-leisure product, it has been argued that ecotourism resort management
should have a particular focus on best-practice environmental management, an educational
and interpretive component, and direct and indirect contributions to the
conservation of the natural and cultural environment (Ayala, 1996).
Couran Cove Island Resort is a large integrated ecotourism-based resort located south of Brisbane on the Gold Coast, Queensland, Australia. As the world’s population becomes increasingly urbanised, the demand for tourist attractions which are environmentally friendly, serene and offer amenities of a unique nature has grown rapidly. Couran Cove Resort, which is one such tourist attraction, is located on South Stradbroke Island, occupying approximately 150 hectares of the island. South Stradbroke Island is separated from the mainland by the Broadwater, a stretch of sea. More than a century ago, there was only one Stradbroke Island, and there were at least four Aboriginal tribes living and hunting on the island. Regrettably, most of the original island dwellers were eventually killed by diseases such as tuberculosis, smallpox and influenza by the end of the 19th century. The wreck of a second ship on the island in 1894, and the subsequent destruction of the ship (the Cambus Wallace) because it contained dynamite, caused a large crater in the sandhills on Stradbroke Island. Eventually, the ocean broke through the weakened landform and Stradbroke became two islands. Couran Cove Island Resort is built on one of the world’s few naturally-occurring sand islands, which is home to a wide range of plant communities and one of the largest remaining remnants of the rare livistona rainforest left on the Gold Coast. Many mangrove and rainforest areas, and Melaleuca Wetlands on South Stradbroke Island (and in Queensland), were cleared, drained or filled for residential, industrial, agricultural or urban development in the first half of the 20th century. Farmers and graziers finally abandoned South Stradbroke Island in 1959 because the vegetation and the soil conditions there were not suitable for agricultural activities.
SUSTAINABLE
PRACTICES OF COURAN COVE RESORT
Being
located on an offshore island, the resort is only accessible by means of water
transport. The resort provides hourly ferry service from the marina on the
mainland to and from the island. Within the resort, transport modes include
walking trails, bicycle tracks and the beach train. The reception area is the
counter of the shop, which has not changed for at least 8 years. The
accommodation is an octagonal “Bure”. These are large rooms that are clean but
the equipment is tired and in some cases only just working. Our ceiling fan only
worked on high speed, for example. Beds are hard but clean. There is a
television, a radio, an old air conditioner and a small fridge. These “Bures”
are right on top of each other and night noises do carry, so be careful what
you say and do. The only thing is the mosquitoes, but if you forget to bring
mosquito repellent, they sell some on the island.
As an
ecotourism-based resort most of the planning and development of the attraction
has been concentrated on the need to co-exist with the fragile natural
environment of South Stradbroke Island to achieve sustainable development.
WATER
AND ENERGY MANAGEMENT
South Stradbroke Island has groundwater at the centre of the island, which has a maximum height of 3 metres above sea level. The water supply is recharged by rainfall and is commonly known as an unconfined freshwater aquifer. Couran Cove Island Resort obtains its water supply by tapping into this aquifer and extracting it via a bore system. Some of the problems which have threatened the island’s freshwater supply include pollution, contamination and over-consumption. In order to minimise some of these problems, all laundry activities are carried out on the mainland. The resort considers washing machines as onerous to the island’s freshwater supply, and that the detergents contain a high level of phosphates which are a major source of water pollution. The resort uses LPG-power generation rather than a diesel-powered plant for its energy supply, supplemented by a wind turbine, which has reduced greenhouse emissions by 70% compared with diesel-equivalent generation methods. Excess heat recovered from the generator is used to heat the swimming pool. Hot water in the eco-cabins and for some of the resort’s vehicles is solar-powered. Water-efficient fittings are also installed in showers and toilets. However, not all the appliances used by the resort are energy efficient, such as refrigerators. Visitors who stay at the resort are encouraged to monitor their water and energy usage via the in-house television systems, and are rewarded with prizes (such as a free return trip to the resort) accordingly if their usage level is low.
CONCLUDING
REMARKS
We examined a case study of good management practice and a pro-active sustainable tourism stance of an eco-resort. In three years of operation, Couran Cove Island Resort has won 23 international and national awards, including the 2001 Australian Tourism Award in the 4-Star Accommodation category. The resort has embraced and has effectively implemented contemporary environmental management practices. It has been argued that the successful implementation of the principles of sustainability should promote long-term social, economic and environmental benefits, while ensuring and enhancing the prospects of continued viability for the tourism enterprise. Couran Cove Island Resort does not conform to the characteristics of the Resort Development Spectrum, as proposed by Prideaux (2000). According to Prideaux, the resort should be at least at Phase 3 of the model (the National tourism phase), which describes an integrated resort providing 3-4 star hotel-type accommodation. The primary tourist market in Phase 3 of the model consists mainly of interstate visitors. However, the number of interstate and international tourists visiting the resort is small, with the principal visitor markets comprising locals and residents from nearby towns and the Gold Coast region. The carrying capacity of Couran Cove does not seem to be of any concern to the Resort management. Given that it is a private commercial ecotourist enterprise, regulating the number of visitors to the resort to minimise damage done to the natural environment on South Stradbroke Island is not a binding constraint. However, the Resort’s growth will eventually be constrained by its carrying capacity, and quantity control should be incorporated in the management strategy of the resort.
READING PASSAGE 3
You should spend about 20 minutes on Questions
27-40 which are based on Reading Passage 3
Theory or Practice?
—What is the point of research carried out by biz
schools?
Students
go to universities and other academic institutions to prepare for their future.
We pay tuition and struggle through classes in the hopes that we can find a
fulfilling and exciting career. But the choice of your university has a large
influence on your future. How can you know which university will prepare you
the best for your future? Like other academic institutions, business schools
are judged by the quality of the research carried out by their faculties.
Professors must both teach students and also produce original research in their
own field. The quality of this research is assessed by academic publications.
At the same time, universities have another responsibility to equip their
students for the real world, however that is defined. Most students learning
from professors will not go into academics themselves—so how do academics best
prepare them for their future careers, whatever that may be? Whether academic
research actually produces anything that is useful to the practice of business,
or even whether it is its job to do so, are questions that can provoke vigorous
arguments on campus.
The
debate, which first flared during the 1950s, was reignited in August, when
AACSB International, the most widely recognised global accrediting agency for
business schools, announced it would consider changing the way it evaluates
research. The news followed rather damning criticism in 2002 from Jeffrey
Pfeffer, a Stanford professor, and Christina Fong of Washington University,
which questioned whether business education in its current guise was
sustainable. The study found that traditional modes of academia were not
adequately preparing students for the kind of careers they faced in current
times. The most controversial recommendation in AACSB’s draft report (which was
sent round to administrators for their comment) is that the schools should be
required to demonstrate the value of their faculties’ research not simply by
listing its citations in journals, but by demonstrating the impact it has in
the professional world. New qualifiers, such as average incomes, student
placement in top firms and business collaborations would now be considered just
as important as academic publications.
AACSB
justifies its stance by saying that it wants schools and faculty to play to
their strengths, whether they be in pedagogy, in the research of practical
applications, or in scholarly endeavor. Traditionally, universities operate in
a pyramid structure. Everyone enters and stays in an attempt to be successful
in their academic field. A psychology professor must publish competitive
research in the top neuroscience journals. A Cultural Studies professor must
send graduate students on new field research expeditions to be taken seriously.
This research is the core of a university’s output. And research of any kind is
expensive—AACSB points out that business schools in America alone spend more
than $320m a year on it. So it seems legitimate to ask for what purpose it is
undertaken?
If a
school chose to specialise in professional outputs rather than academic
outputs, it could use such a large sum of money and redirect it into more fruitful
programs. For example, if a business school wanted a larger presence of
employees at top financial firms, this money may be better spent on a career
center which focuses on building the skills of students, rather than paying for
more high-level research to be done through the effort of faculty. A change in
evaluation could also open the door to inviting more professionals from
different fields to teach as adjuncts. Students could take accredited courses
from people who are currently working in their dream field. The AACSB insists
that universities answer the question as to why research is the most critical
component of traditional education.
On one
level, the question is simple to answer. Research in business schools, as
anywhere else, is about expanding the boundaries of knowledge; it thrives on
answering unasked questions. Surely this pursuit of knowledge is still
important to the university system. Our society progresses because we learn how
to do things in new ways, a process which depends heavily on research and
academics. But one cannot ignore the other obvious practical uses of research
publications. Research is also about cementing schools’ and professors’
reputations. Schools gain kudos from their faculties’ record of publication:
which journals publish them, and how often. In some cases, such as with
government-funded schools in Britain, it can affect how much money they
receive. For professors, the mantra is often “publish or perish”. Their careers
depend on being seen in the right journals.
But at
a certain point, one has to wonder whether this research is being done for the
benefit of the university or for the students the university aims to teach.
Greater publications will attract greater funding, which will in turn be spent
on better publications. Students seeking to enter professions out of academia
find this cycle frustrating, and often see their professors as being part of
the “Ivory Tower” of academia, operating in a self-contained community that has
little influence on the outside world.
Part of the
trouble is that the journals labour under a similar ethos. They publish more
than 20,000 articles each year. Most of the research is highly quantitative,
hypothesis-driven and esoteric. As a result, it is almost universally unread
by real-world managers. Much of the research criticises other published
research. A paper in a 2006 issue of Strategy & Leadership commented that
“research is not designed with managers’ needs in mind, nor is it communicated
in the journals they read. For the most part, it has become a self-referential
closed system irrelevant to corporate performance.” The AACSB demands that this
segregation must change for the future of higher education. If students must
invest thousands of dollars for an education as part of their career path, the
academics which serve the students should be more fully incorporated into the
professional world. This means that universities must focus on other strengths
outside of research, such as professional networks, technology skills, and
connections with top business firms around the world. Though many universities
resisted the report, today’s world continues to change. The universities which
prepare students for our changing future have little choice but to change with
new trends and new standards.