Friday, April 18, 2008

Deep linking

The technology behind the World Wide Web, the Hypertext Transfer Protocol (HTTP), does not actually make any distinction between "deep" links and any other links—all links are functionally equal. This is intentional; one of the designed purposes of the Web is to allow authors to link to any published document on another site. The possibility of so-called "deep" linking is therefore built into the Web technology of HTTP and URLs by default—while a site can attempt to restrict deep links, doing so requires extra effort. According to the World Wide Web Consortium Technical Architecture Group, "any attempt to forbid the practice of deep linking is based on a misunderstanding of the technology, and threatens to undermine the functioning of the Web as a whole". One way to prevent deep linking is to configure the web server to check the referring URL using a rewrite engine.[1]

Usage

Some commercial websites object to other sites making deep links into their content, either because it bypasses advertising on their main pages, passes off their content as that of the linker, or, like The Wall Street Journal, because they charge users for permanently valid links. Sometimes, deep linking has led to legal action, such as in the 1997 case of Ticketmaster versus Microsoft, where Microsoft deep-linked to Ticketmaster's site from its Sidewalk service. This case was settled when Microsoft and Ticketmaster arranged a licensing agreement. Ticketmaster later filed a similar case against Tickets.com, and the judge in this case ruled that such linking was legal as long as it was clear to whom the linked pages belonged.[2] The court also concluded that URLs themselves were not copyrightable, writing: "A URL is simply an address, open to the public, like the street address of a building, which, if known, can enable the user to reach the building. There is nothing sufficiently original to make the URL a copyrightable item, especially the way it is used. There appear to be no cases holding the URLs to be subject to copyright. On principle, they should not be."

Deep linking and rich web technologies

Websites built on rich web technologies such as Adobe Flash and AJAX often do not support deep linking. This can cause usability problems for visitors to such sites: they may be unable to bookmark individual pages or states of the site, the browser's forward and back buttons may not work as expected, and using the browser's refresh button may return them to the initial page.

Recently, a software library called SWFAddress has been built for Adobe Flash which provides all of the above deep-linking and web browser navigation features. Such features would otherwise be very difficult to implement, yet they are very important for websites built wholly in Adobe Flash.

However, this is not a fundamental limitation of these technologies. Well-known techniques now exist that website creators using Flash or AJAX can use to provide deep linking to pages within their sites.
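One such well-known technique (used by libraries like SWFAddress) stores the page state in the URL's fragment identifier, so that each state gets a bookmarkable address and the Back and Forward buttons work. A minimal sketch, with all names and the URL scheme chosen for illustration:

```javascript
// Sketch of fragment-based deep linking: the visible state of the page is
// encoded after "#" in the URL, e.g. https://example.com/#/gallery/3.

// Parse a fragment like "#/gallery/3" into a state object.
function parseHash(hash) {
  const parts = hash.replace(/^#\/?/, "").split("/").filter(Boolean);
  return { section: parts[0] || "home", item: parts[1] || null };
}

// Serialize a state back into a fragment for writing to location.hash.
function toHash(state) {
  return "#/" + state.section + (state.item ? "/" + state.item : "");
}

// In a browser this would be wired up roughly as follows (render() being
// whatever function draws the given state):
//   window.addEventListener("hashchange",
//     () => render(parseHash(location.hash)));
//   location.hash = toHash({ section: "gallery", item: "3" });
```

Because changing the fragment adds a browser history entry without reloading the page, bookmarks, refresh, and Back/Forward all behave as visitors expect, which addresses the usability problems described above.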

Link bait in search engine optimization

The quantity and quality of inbound links are two of the many metrics a search engine ranking algorithm uses to rank a website. Link bait creation falls under the task of link building, and aims to increase the quantity of high-quality, relevant links to a website. Part of successful link baiting is devising a mini PR campaign around the release of a link bait article, so that bloggers and social media users are made aware and can help promote the piece in tandem. Social media traffic can generate a substantial number of links to a single web page. Sustainable link bait is rooted in quality content.

Free for All linking

A free-for-all (FFA) link page is a web page set up ostensibly to improve the search engine placement of a particular web site. Webmasters typically use software to place a link to their site on hundreds of FFA sites, hoping that the resulting incoming links will increase the ranking of their site in search engines. Experts in SEO techniques do not place much value on FFAs. First, most FFAs maintain only a small number of links for a short time—too short for most search engines to pick up. Second, the high "human" traffic to FFA sites consists almost entirely of other webmasters visiting to place their own links manually. Finally, search engine algorithms count more than link numbers; they also check relevance, which the unrelated links on FFA sites lack. Another drawback of FFAs is the amount of e-mail spam webmasters receive from members of the FFA. Using an FFA can be considered a form of spamdexing.