Answer the following questions.

1. What are the questions the author asks himself about his past experiences?

2. How do we usually see “training”?

3. Does the author see the difference between “training” and “learning”?

4. What positive things are we usually trained to do?

5. What is the author’s main concern in the following paragraph? “But when we’re very young, as we are when we start school, how are we able to discern between what’s being taught (information and processes) and what we’re being trained to do (behaviors, actions and reactions)? Are we at all able to pick and choose which things we want to be trained at, and which things we’d prefer not to learn?”

6. Do you think some of our problems can be the result of the way we’re trained to be?

7. Why could we be trained to do wrong things?

8. What questions should we usually ask ourselves if we want to learn more about how we came to be the way we are?

9. Could the training have done more to limit us in our lives?

10. Does training focus on getting people to do things in predictable ways?

11. What parts of you are the result of training?

12. Has that training been of value to you, and helped you to grow?

13. Formulate the message of the article.

  

COLLEGES and UNIVERSITIES

1. What do colleges and universities provide?

2. What jobs require a college education or training?

3. What has created high demand for workers with skills that can be acquired at colleges or universities?

4. What do employers seek to find in college or university graduates?

5. Name the types of colleges.

6. Speak on colleges that are independent of a university and on community colleges.

7. What do community colleges offer?

8. Speak on the following:
a. public colleges and universities as state institutions
b. types of independent colleges and universities
c. professional schools as divisions of large universities

9. Use the expressions from the text to speak on the general roles of faculty: to instruct, to advise, to do original research, to teach courses, to publish findings, to include the findings in the courses, to direct graduates in preparing their master’s theses and doctoral dissertations, to serve on committees, to be active members of professional societies

10. Speak on faculty ranks:
a. instructor or assistant professor
b. associate professor
c. full professor

11. What rank carries tenure?

12. Speak on the methods of instruction: lecture, lecture-discussion, discussion, laboratories, seminar, internship, clinical experience, community service, distance education

Marrying High-Tech and the Humanities

By Michael Dirda

Computer technology may seem an unlikely research tool for a literature professor hoping to

better understand William Shakespeare’s plays or for an artist creating a painting. Increasingly,

however, computers and software are becoming essential tools for literary criticism, academic

publishing, music composition, and sometimes even for the creation of fine art. In this article from

the May 1999 Encarta Yearbook, Pulitzer Prize-winning writer Michael Dirda explores the cutting

edge of this unlikely combination of computer-based technology and disciplines such as art,

literature, and music.

 

 

The dancer jumps and bends and pirouettes across the stage. Instead of being watched by an

audience, however, she is monitored by a computer, which inputs data from her every move.

In an experiment by Joseph Paradiso, principal research scientist and director of the Responsive

Environments Group at the Massachusetts Institute of Technology (MIT) Media Lab in

Cambridge, a dancer's shoes are fitted with sensors that measure pressure points, bend, tilt,

height off the stage, kicks, stomps, and spin. This information is then transmitted via a radio

link to a computer programmed to change the data into images or sounds. In this way the

dancer could simultaneously create a musical composition and a visual light show as she

performed, perhaps to be combined as part of a multimedia piece.
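
The article does not describe the software behind this setup, but the general idea of mapping sensor readings to sound can be sketched briefly. The Python fragment below is a hypothetical illustration: the ShoeReading fields, value ranges, and pitch/velocity mapping are invented for the example and are not the actual MIT Media Lab code.

    import dataclasses

    @dataclasses.dataclass
    class ShoeReading:
        pressure: float   # 0.0 (no weight) .. 1.0 (full weight), assumed normalized
        tilt: float       # -1.0 .. 1.0, assumed normalized forward/back tilt
        height: float     # height off the stage, assumed normalized to 0.0 .. 1.0

    def reading_to_note(r: ShoeReading) -> tuple[int, int]:
        """Map one sensor reading to a (pitch, velocity) pair in MIDI ranges."""
        pitch = 48 + int(r.height * 36)        # higher jump -> higher pitch
        velocity = 20 + int(r.pressure * 100)  # harder stomp -> louder note
        return pitch, min(velocity, 127)

    # Example: a light landing at mid-height
    print(reading_to_note(ShoeReading(pressure=0.4, tilt=0.1, height=0.5)))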

A flight of fancy or a glimpse of the future? Computers and digital technology are rapidly

expanding to influence every aspect of human activity at the end of the 20th century. The

ageless urge of the human species to produce works of art is no exception to this. Technology is

becoming so important in so many categories of the arts that we seem to be in the midst of a

new Renaissance.

The examples of this flowering are everywhere. Powerful new computers are allowing more and

more data to be created and stored digitally, that is, in the binary code that makes up the basic

language of computers. Artists manipulate images to generate complex digital collages and

exhibit their digital and nondigital art on the World Wide Web. Novelists create branching

narratives called hypertext fiction, stories that are explored as much as read. Literary scholars

exchange ideas through online discussion groups and use computers to discover the author of

an unsigned poem hundreds of years old. Composers employ synthesizers and computers to

generate sounds never heard before, while librarians and museum curators digitize entire

collections of art and literature to be accessed online from anywhere in the world. As these

activities become more and more the standard rather than the exception, technology and art

will be further paired and enmeshed.

Crunching Texts

Computers have aided in the study of the humanities for almost as long as the machines have

existed. Decades ago, when the technology consisted solely of massive, number-crunching

mainframe computers, the chief liberal arts applications were in compiling statistical indexes of

works of literature. In 1964, International Business Machines Corporation (IBM) held a

conference on computers and the humanities where, according to a 1985 article in the journal

Science, “most of the conferees were using computers to compile concordances, which are

alphabetical indices used in literary research.”

Mainframe computers helped greatly in the highly laborious task, which dates back to the

Renaissance, of cataloging each reference of a particular word in a particular work.

Concordances help scholars scrutinize important texts for patterns and meaning. Other

humanities applications for computers in this early era of technology included compiling

dictionaries, especially for foreign or antiquated languages, and cataloging library collections.
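
The concordance-building task described above maps naturally onto a small program. The sketch below is a minimal modern Python illustration of the idea, an alphabetical index recording where each word occurs, and not the software used by the conferees of 1964.

    import re
    from collections import defaultdict

    def build_concordance(text: str) -> dict[str, list[int]]:
        """Map each word to the list of line numbers on which it appears."""
        index = defaultdict(list)
        for line_no, line in enumerate(text.splitlines(), start=1):
            for word in re.findall(r"[a-z']+", line.lower()):
                index[word].append(line_no)
        return dict(sorted(index.items()))  # alphabetical, like a printed concordance

    sample = "To be, or not to be, that is the question:\nWhether 'tis nobler in the mind to suffer"
    for word, lines in build_concordance(sample).items():
        print(f"{word:10s} {lines}")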

Such types of computer usage in the humanities may seem limited at first, but they have

produced some interesting results in the last few years and promise to continue to do so. As

computer use and access have grown, so has the number of digitized texts of classic literary

works.

The computer-based study of literary texts has established its own niche in academia. Donald

Foster, an English professor at Vassar College in Poughkeepsie, New York, is one of the leaders

in textual scholarship. In the late 1980s Foster created SHAXICON, a database that tracks all

the “rare” words used by English playwright William Shakespeare. Each of these words appears

in any individual Shakespeare play no more than 12 times. The words can then be cross-

referenced with some 2,000 other poetic texts, allowing experienced researchers to explore

when they were written, who wrote them, how the author was influenced by the works of other

writers, and how the texts changed as they were reproduced over the centuries.
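
SHAXICON itself is a large scholarly database, but the notion of a “rare word” profile can be suggested in a few lines of Python. The threshold of 12 comes from the article; the overlap measure and the sample lines below are assumptions chosen for illustration, not Foster's actual methodology.

    from collections import Counter

    def rare_words(text: str, max_count: int = 12) -> set[str]:
        """Words that occur no more than max_count times in the text."""
        counts = Counter(text.lower().split())
        return {w for w, n in counts.items() if n <= max_count}

    def rare_word_overlap(play: str, candidate: str) -> float:
        """Fraction of the candidate's rare vocabulary shared with the play."""
        shared = rare_words(play) & rare_words(candidate)
        return len(shared) / max(len(rare_words(candidate)), 1)

    play = "full fathom five thy father lies of his bones are coral made"
    poem = "those are pearls that were his eyes nothing of him that doth fade"
    print(f"shared rare-word fraction: {rare_word_overlap(play, poem):.2f}")

In real attribution work the texts would be lemmatized and spelling-normalized, and the comparison would run against the roughly 2,000 reference texts mentioned above rather than a single poem.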

In late 1995 Foster's work attracted widespread notice when he claimed that Shakespeare was

the anonymous author of an obscure 578-line poem, A Funeral Elegy (1612). Although experts

had made similar claims for other works in the past, Foster gained the backing of a number of

prominent scholars because of his computer-based approach. If Foster's claim holds up to long-

term judgment, the poem will be one of the few additions to the Shakespearean canon in the

last 100 years.

Foster's work gained further public acclaim and validation when he was asked to help identify

the anonymous author of the best-selling political novel Primary Colors (1996). After using his

computer program to compare the stylistic traits of various writers with those in the novel,

Foster tabbed journalist Joe Klein as the author. Soon after, Klein admitted that he was the

author. Foster was also employed as an expert in the case of the notorious Unabomber, a

terrorist who published an anonymous manifesto in several major newspapers in 1995.

Foster is just one scholar who has noted the coming of the digital age and what it means for

traditional fields such as literature. “For traditional learning and humanistic scholarship to be

preserved, it, too, must be digitized,” he wrote in a scholarly paper. “The future success of

literary scholarship depends on our ability to integrate those electronic texts with our ongoing

work as scholars and teachers, and to exploit fully the advantages offered by the new medium.”

Foster noted that people can now study Shakespeare via Internet Shakespeare Editions, using

the computer to compare alternate wordings in different versions and to consult editorial

footnotes, literary criticism, stage history, explanatory graphics, video clips, theater reviews, and

archival records. Novelist and literary journalist Gregory Feeley noted that “the simplest (and

least radical) way in which computer technology is affecting textual scholarship is in making

various texts available, and permitting scholars to jump back and forth between them for easy

comparisons.”

Scholars can also take advantage of computer technology in “publishing” their work. Princeton

University history professor Robert Darnton has written of a future in which works of

scholarship are presented digitally in a pyramid-like layering. One might start, he suggests, at

the top with a concise account of a subject, then proceed to detailed documentation and

evidence, continue with a level of questions and discussion points for classroom use, and end

with a place for reports and commentary from readers.

The Power of the Web

Using computers for high-level research such as textual scholarship became feasible as more

and more literary works were digitized during the 1980s. But an important piece of the puzzle

was missing: a way to easily distribute these texts and other digital data. As the 1985 Science

article noted, “There is always the possibility … that students will be able to download both text

and programs directly into the memories of their microcomputers, but it is difficult to imagine

national centers able to distribute files to millions of students around the country.”

This unlikely concept became a reality in the early 1990s with the development of the World

Wide Web. In 1989 British computer scientist Timothy Berners-Lee designed the Web for the

European Laboratory for Particle Physics (CERN) so that scientists working in various locations

could share research and collaborate on projects. But the idea of sharing information soon

spread far beyond anyone's wildest imagination. Humanities scholars and students were quick

to realize the potential of this technology.

Suddenly, a professor in India could post his latest paper about Irish writer James Joyce to be

analyzed by other Joyce scholars around the world, and get quick feedback through e-mail

messages. The Web also allowed a student writing a paper in her dorm room in California to

access a rare original text on a computer in New York or Nigeria. Why bother actually going to

a bricks-and-mortar library? “The World Wide Web has replaced the library, for many of our

students, as the obvious site for conducting original research,” Foster noted.

Virtual Libraries

The rise of the Internet, the so-called Information Highway, has started to transform that

cornerstone of academic research, the library. Increasingly, libraries are becoming places to visit

online rather than in person. The New York Public Library, for example, “dispenses so much

information electronically to readers all over the world that it reports ten million hits on [visits

to] its computer system each month as opposed to 50,000 books dispensed in its reading

room,” wrote Darnton in the New York Review of Books in March 1999.

The Library of Congress (LOC) in Washington, D.C., the unofficial national library of the

United States, has long been a leader in the use of digital technology. Chief among these

efforts is its drive to create a National Digital Library. Begun in the early 1990s, this vast,

ongoing project aims to put much of the LOC's collections of historic and archival documents

online. Some of these documents are too fragile to be handled by the public and were

previously unavailable, but now even a 7-year-old can peruse them on the Web.

About 1.7 million items had been put up on the Web site by April 1999, with a goal of 5

million by the time the library celebrates its 200th anniversary in April 2000. However, library

officials have a long way to go before reaching their ultimate goal of 80 million unique items

online.

Recently the LOC received grants to digitize its collections relating to American inventors

Alexander Graham Bell and Samuel F. B. Morse. Library officials point with pride to the

widespread use of its Web site, American Memory: Historical Collections for the National

Digital Library, from which one can view extensive photographs and documents about the

history of African Americans or a digitized collection of 2,100 early baseball cards from the

years 1887 to 1914. Users can also search the library's enormous holdings or access reading lists

for kids (“Read All About It”).

New Medium

More than just a revolutionary tool for indexing, analyzing, or transmitting content, digital

technology is actually reshaping the creation of art and literature. “Just as film emerged as the

dominant artistic medium of the 20th century, the digital domain—whether it is used for visual

art, music, literature or some other expressive genre—will be the primary medium of the 21st,”

wrote New York Times columnist Matthew Mirapaul in early 1999. More and more writers,

artists, and musicians are using computers and the Internet to enhance, animate, or completely

remake their art, with unconventional and remarkable results.

Publishing, a print-based business that to some people is beginning to represent the past, is

attempting to adapt to the new digital world. Marc Aronson, a senior children's book editor at

the publishing house Henry Holt and a longtime student of the impact of changing technology

on publishing, describes this impact as a kind of blurring or hybridization. “The keynote of the

digital age is overlap, multiplicity, synergy. The digital does not replace print, it subsumes it,”

Aronson said. “Print becomes a form of the digital, just as the digital has a special place when it

appears in print.” Especially in books for young people, he notes, more authors and artists are

trying books with multiple storylines or told from various points of view.

One strain of this new type of nonlinear writing is popularly known as hypertext fiction. At its

simplest, hypertext fiction mimics the Choose Your Own Adventure books that became

popular in the early 1980s. In these books, readers directed the story by choosing which page to

turn to at key points based on what they wanted the character to do. In hypertext fiction, the

reader explores different branches of a story on a computer by clicking on hyperlinks in the

text. The result is a fragmented, slightly surreal narrative in which time is not linear and there

is no obvious conclusion.
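
Structurally, a hypertext fiction is a directed graph of text passages joined by links that the reader chooses among. The toy Python sketch below shows only that structure; the node names and story fragments are invented for the example and are not taken from Joyce's afternoon.

    story = {
        "start": ("You witness a car crash on the way home.", ["approach", "walk_on"]),
        "approach": ("The wrecked car looks disturbingly familiar.", ["call_for_help", "walk_on"]),
        "walk_on": ("You keep walking, unsure of what you saw.", ["start"]),
        "call_for_help": ("Help arrives; you never learn who was inside.", []),
    }

    node = "start"
    while True:
        text, links = story[node]
        print(text)
        if not links:
            break
        node = links[0]  # a real reader would choose; here we simply follow the first link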

Michael Joyce, like Foster a professor of English at Vassar, is a leading theoretician and author

of hypertext fiction. He wrote what is widely considered the first major work of hypertext

fiction, afternoon, a story (1990). The piece consists of more than 500 different screens, or pages,

which are connected by more than 900 links. afternoon centers on a man who witnesses a

serious car accident that may or may not have involved his ex-wife and son, who may or may

not have survived. Joyce has also published Twilight, A Symphony (1996), about a man estranged

from his wife who is on the run with their infant son.

Joyce defines hypertext fiction as “stories that change each time you read them.” He notes that

“interactive narrative does not necessarily mean multiple plot lines, but can also mean

exploring the multiple thematic lines or contours of a story.”

Not surprisingly, hypertext has frequently come under attack from traditional critics. Perhaps

the most powerfully simple critique, however, comes from Charles Platt, a contributing editor

for Wired magazine and a prominent science-fiction writer and critic. “Could it be,” wonders

Platt, “that storytelling really doesn't work very well if the user can interfere with it?” People

really want the author, scriptwriter, or actors to do the heavy lifting of narrative, he argues. On

the other hand, Platt suspects that we have hardly begun to explore true interactive media and

that it will be utterly different from fiction as we know it today.

Roll Over, Beethoven

Although the distribution of recorded music went digital with the introduction of the compact

disc in the early 1980s, technology has had a large impact on the way music is made and

recorded as well. At the most basic level, the invention of MIDI (Musical Instrument Digital

Interface), a language enabling computers and sound synthesizers to talk to each other, has

given individual musicians powerful tools with which to make music.
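
A MIDI note event is little more than a note number and a velocity, which is what makes desktop composition so accessible. As an illustration, the sketch below writes a two-note MIDI file using the third-party Python library mido (assuming it is installed); any MIDI-capable synthesizer or player can render the result.

    import mido  # third-party library: pip install mido

    mid = mido.MidiFile()
    track = mido.MidiTrack()
    mid.tracks.append(track)

    # Middle C, then the E above it (MIDI note numbers 60 and 64)
    for note in (60, 64):
        track.append(mido.Message("note_on", note=note, velocity=80, time=0))
        track.append(mido.Message("note_off", note=note, velocity=80, time=480))

    mid.save("two_notes.mid")  # playable by any MIDI-capable synthesizer or player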

“The MIDI interface enabled basement musicians to gain power which had been available only

in expensive recording studios,” Platt observed. “It enables synthesis of sounds that have never

existed before, and storage and subsequent simultaneous replay and mixing of multiple sound

tracks. Using a moderately powerful desktop computer running a music composition program

and a $500 synthesizer, any musically literate person can write—and play!—a string quartet in an

afternoon.”

Serious music scholars and composers are also utilizing computers to forge new paths in music.

A prime example is David Cope, professor of music at the University of California at Santa

Cruz, who began developing a computer music program in the early 1980s. Cope originally

wanted a program that would help him overcome mental blocks when he composed. Through

years of tinkering, the software, called Experiments in Musical Intelligence (EMI), has become a

full-fledged compositional program. Cope supplies bits of musical information to EMI, which

has been designed to recognize a variety of styles and patterns, and the program then processes

this material to generate pieces of original music.
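
EMI is far more sophisticated than anything that fits in a few lines, but the flavor of pattern-based generation can be hinted at with a simple Markov chain over notes: learn which note tends to follow which in the source material, then walk those transitions to produce new material. This is an illustrative analogy only, not Cope's actual algorithm.

    import random
    from collections import defaultdict

    def learn_transitions(melody: list[str]) -> dict[str, list[str]]:
        """Record which note tends to follow which in the source material."""
        follows = defaultdict(list)
        for a, b in zip(melody, melody[1:]):
            follows[a].append(b)
        return follows

    def generate(melody: list[str], length: int = 12) -> list[str]:
        follows = learn_transitions(melody)
        note = melody[0]
        out = [note]
        for _ in range(length - 1):
            note = random.choice(follows.get(note, melody))
            out.append(note)
        return out

    print(generate(["C", "E", "G", "E", "C", "D", "E", "F", "G"]))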

The result is “disturbing,” said cognitive scientist Douglas Hofstadter, Pulitzer Prize-winning author of

the book Gödel, Escher, Bach: An Eternal Golden Braid (1979). “You can actually get pretty good

music.”

Whereas many musicians use computers as a tool in composing or producing music, Tod

Machover uses computers to design the instruments and environments that produce his music.

As a professor of music and media at the MIT Media Lab, Machover has pioneered

hyperinstruments: hybrids of computers and musical instruments that allow users to create

sounds simply by raising their hands, pointing with a “virtual baton,” or moving their entire

body in a “sensor chair.”

Similar work on a “virtual orchestra” is being done by Geoffrey Wright, head of the computer

music program at Johns Hopkins University's Peabody Conservatory of Music in Baltimore,

Maryland. Wright uses conductors' batons that emit infrared light beams to generate data

about the speed and direction of the batons, data that can then be translated by computers into

instructions for a synthesizer to produce music.

In Machover's best-known musical work, Brain Opera (1996), 125 people interact with each

other and a group of hyperinstruments to produce sounds that can be blended into a musical

performance. The final opera is assembled from these sound fragments, material contributed

by people on the Web, and Machover's own music. Machover says he is motivated to give

people “an active, directly participatory relationship with music.”

More recently, Machover helped design the Meteorite Museum, a remarkable underground

museum that opened in June 1998 in Essen, Germany. Visitors approach the museum through

a glass atrium, open an enormous door, enter a cave, and then descend by ramps into various

multimedia rooms. Machover composed the music and designed many of the interactions for

these rooms. In the Transflow Room, the undulating walls are covered with 100 rubber pads

shaped like diamonds. “By hitting the pads you can make and shape a sound and images in the

room. Brain Opera was an ensemble of individual instruments, while the Transflow Room is a

single instrument played by 40 people. The room blends the reactions and images of the

group.”

Machover believes that music is in general poorly served in elementary schools and hopes to

change this. His inventions, including some intended specifically for children, are designed to

help bring music education and appreciation to a wider audience. Machover is convinced that

computer science will eventually become a permanent part of regular musical training.

Machover's projects at MIT include Music Toys and Toys of Tomorrow, which are creating

devices that he hopes will eventually make a Toy Symphony possible. Machover describes one

of the toys as an embroidered ball the size of a small pumpkin with ridges on the outside and

miniature speakers inside. “We've recently figured out how to send digital information through

fabric or thread,” he said. “So the basic idea is to squeeze the ball and where you squeeze and

where you place your fingers will affect the sound produced. You can also change the pitch to

high or low, or harmonize with other balls.”

Computer music has a long way to go before it wins mass acceptance, however. Martin

Goldsmith, host of National Public Radio's Performance Today, explains why: “I think that a

reason a great moving piece of computer music hasn't been written yet is that—in this instance—

the technology stands between the creator and the receptor and prevents a real human

connection,” Goldsmith said. “All that would change in an instant if a very accomplished

composer—a Steve Reich or John Corigliano or Henryk Górecki—were to write a great piece of

computer music, but so far that hasn't happened. Nobody has really stepped forward to make a

wide range of listeners say, ‘Wow, what a terrific instrument that computer is for making

music!’”

But Is It Art?

The art world has also seen the impact of digital technology in varying degrees and methods. As

is often evident in their work, many artists constantly push the boundaries of art and the tools

and materials with which they work. New mediums are not burdened by the weight of history,

and they provide the artist with a fresh means of expression.

Digital art can be generally divided into two areas: art that is either made with or relies on

computers and can be printed out or is otherwise three-dimensional, and art that is completely

contained within the digital world. Early physical pieces were mostly printouts from computer

graphics programs, but a November 1998 show at the School of Visual Arts in New York City

included elaborate interactive art.

One piece at this show, Office Plant #1 (1998), is a sort of mechanical flower that blooms or

wilts—and even groans—in reaction to the contents of e-mail messages on an attached computer.

Other pieces have audio soundtracks, video displays, and moving parts. Another show, the

Boston Cyberarts Festival held in May 1999, included a wide variety of new technology art.

One featured example was the work of French artist Christian Lavigne, who is a pioneer in the

field of cybersculpture (virtual sculpture on the Web) and robosculpture (sculpture done with the

aid of computer-controlled machines).

One unique approach to computer art is the path taken by British artist Harold Cohen, who

became interested in computers and art as far back as the late 1960s. Cohen, a well-known

abstract painter in his own right, has spent more than two decades creating and refining a

“robot artist” he calls Aaron. Cohen has painstakingly programmed Aaron to draw and paint

with a mechanized arm, from basic shapes to, more recently, human forms. Cohen has had to

program the computer with data on proportion, depth, visual angles, color, and other

concepts. No two of Aaron's paintings are alike, and the results are impressive enough to cause

some people to wonder who is actually creating the art, the human programmer or the

computer.

With the explosive growth of the Internet and World Wide Web, much recent attention has

focused on online art. In December 1995 art critic and writer Robert Atkins wrote in the

magazine Art in America that the 1994-1995 art season would be known as “the year the art

world went online.” The first commercial art galleries opened on the Internet, and physical

installations such as Antonio Muntadas's The File Room (1994)—a detailed look at the history of

censorship—also went up on the Web. Other works soon followed that mixed Web-based

design with artistic statements. Artists began to see the Internet not just as a means for

publicity or distribution of art works but also as a medium of expression in itself.

A little over three years later, Atkins described in the same publication the growing number of

“original, interactive works that can only be experienced on the Net, rather than the digitized

images of paintings or photographs that characterize most gallery or museum [Web] sites. Many

online pieces now capitalize on the burgeoning capacity of the Web to deliver video and sound,

as well as text and graphics.” Atkins points to one striking work, American Friederike Paetzold's

I-Section (1998), in which the visitor “dissects” a torso, removing organs to reveal multiple

layers of imagery and text.

By combining elements of hypertext fiction and computer music with visual media such as

photographs and video, digital artists are breaking down old artistic barriers and producing

works for all the senses. Science fiction and fantasy author Richard Grant sees this as the

ultimate goal. “When I think of hypertext—and computer-driven art forms in general—these

days, I think of opera. Specifically, I think of [German composer] Richard Wagner, his idea of

the Gesamtkunstwerk (total art work), a sort of Grand Unified Field Theory in which opera is

seen as the final summation of all previous art forms: music, literature, drama, painting,

sculpture (present in the construction of the sets), poetry, dance, public ritual, and sheer

spectacle (or what would now be called special effects).” Increasingly, academic programs—such

as the Consortium for Research and Education in the Arts and Technology (CREAT) at the

University of Central Florida in Orlando—are bringing students from various disciplines

together to generate such innovative multimedia pieces.

Future Shakespeares?

New technology has always led to innovation in the arts. After all, the favorite watchword of

the poet, painter, or composer is “Make it new.” For this, artists can use new tools, and the

computer is one of the most powerful tools in human history. As the digital future looms ever

closer, the biggest difference may be that the artists of tomorrow will use digital tools as a

matter of course. The next genius to reshape the world of the arts—the next Wolfgang Amadeus

Mozart, Pablo Picasso, or William Shakespeare—could be a 14-year-old just now beginning to

experiment with her home computer. And she will not be alone. At the cusp of the new

millennium, digital technology seems poised to make artists and creators of us all.

 

Russia's Integration into the Bologna Process

By Fyodor Gogolin

The Bologna Process is a movement whose goal is the creation of a single educational space. The Russian Federation joined the Bologna Process in September 2003 at the Berlin conference, committing itself to implementing the main principles of the Bologna Process by 2010.
The formation of a pan-European system of higher education within the Bologna Process rests on shared fundamental principles of how higher education operates. The proposals considered within the Bologna Process come down to the following:

· the introduction of two-cycle (two-level) study;

· the introduction of a credit system;

· quality control of education;

· the expansion of mobility;

· ensuring the employment of graduates;

· ensuring the attractiveness of the European system of education.

However, the measures involved in implementing these proposals provoke conflicting assessments and a growing debate. An analysis is needed of the ways and means of integrating Russian higher education into the European educational space as effectively as possible.
Similar trends in higher education can be observed in all developed countries, so some of the changes that are overdue in Russia objectively coincide with the recommendations of the Bologna Declaration. The problems driving the Bologna Process are in many respects characteristic of Russia as well. It is also clear that self-isolation from the world educational space can have negative consequences for any national educational system. Efforts to develop education should therefore be combined, while preserving national achievements and traditions. This will make Russian higher education more competitive. International integration must be developed while retaining the best of Russia's own experience.


