What is New Media?
New Media is a relatively new field of study that has developed around cultural practices with the computer playing a central role as the medium for production, storage and distribution.
"New" in this context means:
• the relative novelty of digital computing
• the unprecedented speed of evolution and mutation of devices and technologies
• undeveloped, imperfect and experimental environments
• subjective novelty: most of the artists and theoreticians currently studying digital culture have migrated from other disciplines.
New media refers to new forms of human and media communication that have been transformed by the creative use of technology to fulfil the same basic social need to interact and transact.
Although definitions of the term vary, this is sometimes assumed to imply two consistent characteristics:
• Uniquely individualized information can simultaneously be delivered or displayed to a potentially infinite number of people.
• All players involved (e.g. publishers, broadcasters, consumers) share equal or reciprocal control over content.
World Wide Web
The World Wide Web ("WWW" or simply the "Web") is a global, read-write information space. Text documents, images, multimedia and many other items of information, referred to as resources, are identified by short, unique, global identifiers called Uniform Resource Identifiers (URIs) so that each can be found, accessed and cross-referenced in the simplest possible way.
The term is often mistakenly used as a synonym for the Internet itself, but the Web is actually something that is available over the Internet, just like e-mail and many other Internet services.
The World Wide Web is the combination of four basic ideas:
• Hypertext, that is the ability, in a computer environment, to move from one part of a document to another or from one document to another through internal connections among these documents (called "hyperlinks");
• Resource Identifiers, that is the ability, on a computer network, to locate a particular resource (computer file, document or other resource) on the network through a unique identifier;
• The Client-server model of computing, in which client software or a client computer makes requests of server software or a server computer that provides the client with resources or services, such as data or files; and
• Markup language, in which characters or codes embedded in text indicate structure, semantic meaning or advice on presentation.
On the World Wide Web, a client program called a web browser retrieves information resources, such as web pages and other computer files, from web servers using their URLs and displays them, typically on a computer monitor. One can then follow hyperlinks in each page to other resources on the World Wide Web whose location is provided by these hyperlinks. It is also possible, for example by filling in and submitting web forms, to post information back to a web server for it to save or process in some way. The act of following hyperlinks is often called "browsing" or "surfing" the Web. Web pages are often arranged in collections of related material called "websites."
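The browsing step described above can be sketched with Python's standard library: a browser-like program extracts the href targets from the HTML it has received and resolves them against the current page's URL before issuing the next request. The page fragment and the base URL below are hypothetical, used only for illustration.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects the href target of every <a> tag it encounters."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A fragment of a hypothetical web page.
page = ('<p>See the <a href="/wiki/Hypertext">hypertext</a> article '
        'or an <a href="http://example.com/">external site</a>.</p>')

parser = LinkExtractor()
parser.feed(page)

# Resolve each link against the URL of the page it appeared on,
# just as a browser does before issuing the next request.
base = "http://en.wikipedia.org/wiki/Hyperlink"
targets = [urljoin(base, href) for href in parser.links]
print(targets)
```

Note how the relative link `/wiki/Hypertext` becomes an absolute URL on the same host, while the already-absolute link is left unchanged.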
Uniform Resource Locator
A Uniform Resource Locator (URL) is a string of characters conforming to a standardized format, which refers to a resource on the Internet (such as a document or an image) by its location. For example, the URL of this page on Wikipedia is http://en.wikipedia.org/wiki/Uniform_Resource_Locator.
An HTTP URL, commonly called a web address, is usually shown in the address bar of a web browser. The term is typically pronounced either as a spelled-out initialism ("yoo arr ell") or as an acronym ("earl", or "ural" as in the Ural Mountains). Tim Berners-Lee created the URL in 1991 to allow the publishing of hyperlinks on the World Wide Web, a fundamental innovation in the history of the Internet. Since 1994, the URL has been subsumed into the more general Uniform Resource Identifier (URI), but URL is still a widely used term. The U in URL has always stood for Uniform, but it is sometimes expanded as Universal, perhaps because URI did mean Universal Resource Identifier before RFC 2396.
Hypertext is the organization of information units into connected associations that a user can choose to make. An instance of such an association is called a link or hypertext link. Hypertext was the main concept that led to the invention of the World Wide Web, which is, after all, nothing more (or less) than an enormous amount of information content connected by an enormous number of hypertext links. The term was first used by Ted Nelson in describing his Xanadu system. In a hypertext document, a highlighted word or image can be selected by the user (with a mouse or in some other fashion), resulting in the immediate delivery and view of another file. The highlighted object is referred to as an anchor. The anchor reference and the object referred to constitute a hypertext link. Links are what make the World Wide Web a web.
In telecommunications, a link is a physical (and, in some usages, a logical) connection between two points.

Hypermedia, a term derived from hypertext, extends the notion of the hypertext link to include links among any set of multimedia objects, including sound, motion video, and virtual reality. It can also connote a higher level of user/network interactivity than the interactivity already implicit in hypertext.

In electronic communication, bandwidth is the width of the range (or band) of frequencies that an electronic signal uses on a given transmission medium. In this usage, bandwidth is expressed in terms of the difference between the highest-frequency signal component and the lowest-frequency signal component. Since the frequency of a signal is measured in hertz (the number of cycles of change per second), a given bandwidth is the difference in hertz between the highest frequency the signal uses and the lowest frequency it uses. A typical voice signal has a bandwidth of approximately three kilohertz (3 kHz); an analog television (TV) broadcast video signal has a bandwidth of six megahertz (6 MHz), some 2,000 times as wide as the voice signal.
In computer networks, bandwidth is often used as a synonym for data transfer rate - the amount of data that can be carried from one point to another in a given time period (usually a second). This kind of bandwidth is usually expressed in bits (of data) per second (bps). Occasionally, it's expressed as bytes per second (Bps). A modem that works at 57,600 bps has twice the bandwidth of a modem that works at 28,800 bps. In general, a link with a high bandwidth is one that may be able to carry enough information to sustain the succession of images in a video presentation.
It should be remembered that a real communications path usually consists of a succession of links, each with its own bandwidth. If one of these is much slower than the rest, it is said to be a bandwidth bottleneck.
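The arithmetic behind these figures is simple division: transfer time is the number of bits divided by the link's bandwidth, and on a multi-link path the slowest link sets the effective rate. A small sketch, where the 1-megabyte file size is an assumed example:

```python
def transfer_time(size_bits, bandwidth_bps):
    """Seconds needed to move size_bits over a link of the given bandwidth."""
    return size_bits / bandwidth_bps

# A 1-megabyte file is 8,388,608 bits (1,048,576 bytes x 8).
file_bits = 1_048_576 * 8

# Over the two modem speeds mentioned above:
print(transfer_time(file_bits, 57_600))   # ~145.6 seconds
print(transfer_time(file_bits, 28_800))   # ~291.3 seconds, twice as long

# On a multi-link path, the slowest link (the bottleneck) sets the pace.
path_bandwidths = [57_600, 1_000_000, 28_800]
effective = min(path_bandwidths)
print(effective)  # 28800
```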
Markup language: HTML
In computing, HyperText Markup Language (HTML) is a markup language designed for the creation of web pages with hypertext and other information to be displayed in a web browser. HTML is used to structure information — denoting certain text as headings, paragraphs, lists and so on — and can be used to describe, to some degree, the appearance and semantics of a document. XHTML, which applies the stricter rules of XML to HTML to make it easier to process and maintain, is the W3C's successor to HTML.
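A sketch of what "markup indicates structure" means in practice, using Python's built-in html.parser on a minimal, hypothetical document: the tags identify each run of text as a heading, paragraph or list item, without saying anything about how it must look.

```python
from html.parser import HTMLParser

# A minimal, hypothetical HTML document.
doc = """<html><body>
<h1>New Media</h1>
<p>A short introduction.</p>
<ul><li>hypertext</li><li>URLs</li></ul>
</body></html>"""

class OutlineParser(HTMLParser):
    """Records which structural element each run of text belongs to."""
    def __init__(self):
        super().__init__()
        self.current = None
        self.outline = []

    def handle_starttag(self, tag, attrs):
        self.current = tag

    def handle_data(self, data):
        text = data.strip()
        if text and self.current in ("h1", "p", "li"):
            self.outline.append((self.current, text))

parser = OutlineParser()
parser.feed(doc)
print(parser.outline)
```

The parser recovers an outline of the document — heading, paragraph, list items — purely from the embedded codes, which is exactly the sense in which HTML "structures" information.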
A hyperlink, or simply a link, is a reference in a hypertext document to another document or other resource. As such it is similar to a citation in literature. Combined with a data network and suitable access protocol, a computer can be instructed to fetch the resource referenced. The term "hyperlink" was coined in 1965 (or possibly 1964) by Ted Nelson at the start of Project Xanadu.
Hyperlinks are part of the foundation of the World Wide Web created by Tim Berners-Lee.
There are a number of ways to format and present hyperlinks on a web page. An embedded link is one of the more common formats: one or more words of distinctively styled text that the reader can select.
How hyperlinks work in HTML
A link has two ends, called anchors, and a direction. The link starts at the source anchor and points to the destination anchor. However, the term link is often used for the source anchor, while the destination anchor is called the link target.
The most common link target is a URL used in the World Wide Web. This can refer to a document, e.g. a webpage, or other resource, or to a position in a webpage. The latter is achieved by means of an HTML element with a "name" or "id" attribute at that position of the HTML document. The URL of the position is the URL of the webpage with "#" and the value of that attribute appended.
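Python's urllib.parse can illustrate how such a target splits into the page's URL and the anchor name; the fragment value below is hypothetical.

```python
from urllib.parse import urldefrag

# A link target that points at a named position inside a page:
# the part after '#' matches an element's "name" or "id" attribute.
target = "http://en.wikipedia.org/wiki/Hyperlink#How_hyperlinks_work_in_HTML"

url, fragment = urldefrag(target)
print(url)       # the page itself
print(fragment)  # the named position within it
```

The browser requests only the part before the "#"; the fragment is used locally, to scroll to the anchor after the page has loaded.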
URLs in everyday use
An HTTP URL combines into one simple address the four basic items of information necessary to retrieve a resource from anywhere on the Internet:
• the protocol to use to communicate,
• the host (server) to communicate with,
• the network port on the server to connect to,
• the path to the resource on the server (for example, its file name).
A typical URL can look like this:
http://en.wikipedia.org:80/wiki/Special:Search?search=train&go=Go
In the example above:
• http is the protocol,
• en.wikipedia.org is the host,
• 80 is the network port number on the server (as 80 is the default value for the HTTP protocol, this portion could have been omitted entirely),
• /wiki/Special:Search is the resource path,
• ?search=train&go=Go is the query string; this part is optional.
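The same decomposition can be checked with Python's urllib.parse, which splits a URL into exactly these pieces:

```python
from urllib.parse import urlsplit

parts = urlsplit("http://en.wikipedia.org:80/wiki/Special:Search?search=train&go=Go")

print(parts.scheme)    # the protocol: http
print(parts.hostname)  # the host: en.wikipedia.org
print(parts.port)      # the network port: 80
print(parts.path)      # the resource path: /wiki/Special:Search
print(parts.query)     # the query string: search=train&go=Go
```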
How the Web works
When a viewer wants to access a web page or other resource on the World Wide Web, they normally begin either by typing the URL of the page into a web browser, or by following a hypertext link to that page or resource. The first step, behind the scenes, is for the server-name part of the URL to be resolved into an IP address by the global, distributed Internet database known as the Domain Name System, or DNS.
The next step is for an HTTP request to be sent to the web server at that IP address, for the page required. In the case of a typical web page, the HTML text, graphics and any other files that form a part of the page will be requested and returned to the client (the web browser) in quick succession.
The web browser's job is then to render the page as described by the HTML, CSS and other files received, incorporating the images, links and other resources as necessary. This produces the on-screen 'page' that the viewer sees.
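The request the browser sends in the second step is plain text. A minimal sketch of how such a request could be assembled (a real browser adds many more headers):

```python
from urllib.parse import urlsplit

def build_get_request(url):
    """Return the text of a minimal HTTP/1.1 GET request for the URL."""
    parts = urlsplit(url)
    path = parts.path or "/"
    if parts.query:
        path += "?" + parts.query
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {parts.netloc}\r\n"
            f"Connection: close\r\n"
            f"\r\n")

request = build_get_request("http://en.wikipedia.org/wiki/Hypertext")
print(request)
```

Sent to port 80 of the resolved IP address, this text would prompt the server to return the HTML of the page, which the browser then renders.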
Most web pages will themselves contain hyperlinks to other relevant and informative pages and perhaps to downloads, source documents, definitions and other web resources.
Such a collection of useful, related resources, interconnected via hypertext links, is what has been dubbed a 'web' of information. Making it available on the Internet produced what Tim Berners-Lee first called the World Wide Web in the early 1990s.
If the user returns to a page fairly soon, the data will often not be retrieved from the source web server again. By default, browsers cache web resources on the local hard drive. The browser sends an HTTP request that asks for the data only if it has been updated since the last download; if it has not, the cached version is reused in the rendering step.
This is particularly valuable in reducing the amount of web traffic on the Internet. Apart from the facilities built into web servers that can ascertain when physical files have been updated, designers of dynamically generated web pages can control the HTTP headers sent back to requesting clients, so that pages are not cached when they should not be, for example on Internet banking and news pages.
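The update check works through conditional HTTP headers: the browser includes an If-Modified-Since header on a revisit, and the server answers 304 Not Modified when the resource is unchanged, so the cached copy can be reused. A simplified sketch of both sides (the dates and the respond helper are hypothetical):

```python
from email.utils import parsedate_to_datetime

# The time the browser last downloaded the resource (hypothetical).
cached_at = "Sat, 01 Apr 2006 10:00:00 GMT"

# Conditional header the browser sends when the user returns to the page.
headers = {"If-Modified-Since": cached_at}

def respond(last_modified, request_headers):
    """Server side: 304 if the resource is unchanged, else 200 with the body."""
    since = request_headers.get("If-Modified-Since")
    if since and parsedate_to_datetime(last_modified) <= parsedate_to_datetime(since):
        return 304  # unchanged: the client may reuse its cached copy
    return 200      # changed (or no condition): send the full resource

print(respond("Wed, 01 Mar 2006 09:00:00 GMT", headers))  # 304
print(respond("Mon, 01 May 2006 09:00:00 GMT", headers))  # 200
```

A page that must never be cached, such as a banking page, would instead be sent with headers like Cache-Control: no-store, suppressing this mechanism entirely.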
The World Wide Web had a number of differences from other hypertext systems that were then available:
• The WWW required only unidirectional links rather than bidirectional ones. This made it possible for someone to link to another resource without action by the owner of that resource. It also significantly reduced the difficulty of implementing Web servers and browsers (in comparison to earlier systems), but in turn presented the chronic problem of broken links.
• Unlike certain applications, such as HyperCard, the World Wide Web was non-proprietary, making it possible to develop servers and clients independently and to add extensions without licensing restrictions.
On April 30, 1993, CERN announced that the World Wide Web would be free to anyone, with no fees due. Coming two months after the announcement that gopher was no longer free to use, this produced a rapid shift away from gopher and towards the Web.
The World Wide Web finally gained critical mass with the 1993 release of the graphical Mosaic web browser, developed by Marc Andreessen at the National Center for Supercomputing Applications. Prior to the release of Mosaic, graphics were not commonly mixed with text in Web pages, and the Web was less popular than older protocols in use over the Internet, such as the Gopher protocol and Wide Area Information Server (WAIS). Mosaic's graphical user interface allowed the Web to become by far the most popular Internet service.
At its core, the Web is made up of three standards:
• the Uniform Resource Identifier (URI), which is a universal system for referencing resources on the Web, such as Web pages;
• the HyperText Transfer Protocol (HTTP), which specifies how the browser and server communicate with each other; and
• the HyperText Markup Language (HTML), used to define the structure and content of hypertext documents.
Berners-Lee now heads the World Wide Web Consortium (W3C), which develops and maintains these and other standards that enable computers on the Web to effectively store and communicate different forms of information.
Publishing web pages
The Web is available to individuals outside mass media. In order to "publish" a web page, one does not have to go through a publisher or other media institution, and potential readers can be found in all corners of the globe.

Unlike books and documents, hypertext does not have a linear order from beginning to end. It is not broken down into a hierarchy of chapters, sections, subsections, and so on.

Many different kinds of information are now available on the Web, and for those who wish to learn about other societies, their cultures and peoples, it has become easier. When travelling in a foreign country or a remote town, one might be able to find some information about the place on the Web, especially if the place is in one of the developed countries. Local newspapers, government publications, and other materials are easier to access, and therefore the variety of information obtainable with the same effort may be said to have increased for users of the Internet.

Although some websites are available in multiple languages, many are in the local language only. Also, not all software supports all special characters or right-to-left languages. These factors challenge the notion that the World Wide Web will bring unity to the world.

The increased opportunity to publish materials is certainly observable in the countless personal pages, as well as pages by families, small shops, and so on, facilitated by the emergence of free web hosting services.
The Web suffers from link rot, links becoming broken because of the continual disappearance or relocation of web resources over time. The ephemeral nature of the Web has prompted many efforts to archive the Web. The Internet Archive is one of the most well-known efforts; they have been archiving the Web since 1996.