Component 11 - Key Definitions Flashcards

1
Q

HTML, CSS and JavaScript. See appendix 5e.

A

See appendix 5 for details of the level of HTML, CSS and JavaScript you need to know. I recommend the W3Schools tutorials for your revision here.

2
Q

Search engine indexing

A

To allow you to search the WWW, a search engine must first know which web pages exist, what they are about, and have some way of categorising them. The process of finding, exploring and storing data about web pages is called “indexing.”
To index pages, a search engine uses a program called a “crawler.” A crawler is simply a program that visits a web page, finds all the hyperlinks on that page and then visits those in turn. This process rapidly grows in scale and time requirements. As the crawler visits each page, it stores the information it finds (metadata) in the index.
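The crawling process described above can be sketched as a breadth-first traversal of the link graph. This is a minimal illustration, not a real crawler: the `fetch_links` function is an assumed stand-in for fetching a page and parsing its anchor tags.

```python
from collections import deque
from urllib.parse import urljoin

def crawl(seed_url, fetch_links, max_pages=100):
    """Breadth-first crawl: visit pages, follow their hyperlinks,
    and store what was found (metadata) in an index.

    fetch_links is a hypothetical callable: url -> list of hrefs on that page.
    """
    index = {}                       # url -> stored metadata
    queue = deque([seed_url])        # pages waiting to be visited
    seen = {seed_url}                # avoid re-queuing the same page
    while queue and len(index) < max_pages:
        url = queue.popleft()
        links = fetch_links(url)     # hyperlinks found on this page
        index[url] = {"outlinks": links}
        for link in links:
            absolute = urljoin(url, link)   # resolve relative links
            if absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return index
```

The `seen` set is what stops the crawl from looping forever when pages link back to each other, which is exactly why the process grows so quickly on the real web.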

3
Q

PageRank algorithm

A

PageRank is specific to Google. The initial idea came from a simple principle: the more often a website is linked to, the more important it must be. Surprisingly, this concept had not been explored before Larry Page and Sergey Brin applied it, and it turned out remarkably well. The principle was later expanded to weight links by the quality of the site doing the linking, along with other factors that affect a page’s “reputation.” All of these factors are combined to indicate where each page should rank when searched for.
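The core idea, that a link from an important page counts for more, can be shown with a simplified iterative sketch. This is the textbook power-iteration form with a damping factor, not Google’s actual implementation, and the link graph is invented for illustration.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Repeatedly share each page's rank across its outgoing links.

    links: {page: [pages it links to]}. Ranks start equal and converge
    so that pages linked to by high-rank pages end up with high rank.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline rank (the damping term)
        new = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly over all pages
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:
                for target in outlinks:
                    new[target] += damping * rank[page] / len(outlinks)
        rank = new
    return rank
```

Because rank flows *through* links, a page linked to by an already important page gains more than one linked to by an obscure page, which is the “quality of website that is linking” weighting the card describes.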

4
Q

Server and client side processing

A

Nearly all modern web pages and applications rely on some form of processing to generate dynamic or responsive content. There are two choices for where this processing happens: on the client machine, or on the server.
Server side processing has the advantage that it is more secure and less prone to tampering. However, it comes with obvious hardware demands: the more clients there are, the greater the load on the server. Server side processing is essential for websites that include things like shopping carts, payments and user profiles.
Client side processing is suitable for things like graphical effects, automatic updating of content and interactive features that don’t rely on the retrieval of data (simple games, for example). Client side processing has the advantage of removing demand from the web server. However, because there is no guarantee of the type of device a client is running, the code may run slowly, give variable performance, or not run at all due to incompatibilities (such as an unsupported browser or script blocking).
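The security point about server side processing can be made concrete: a shopping-cart total must be computed on the server from a server-held price list, because anything the client sends (including prices) can be tampered with. This is a hypothetical sketch; the catalogue and function names are invented for illustration.

```python
# Assumed server-side price list; never taken from the client's request.
CATALOGUE = {"mug": 6.50, "tshirt": 12.00}

def checkout_total(cart):
    """Compute the order total on the server.

    cart: {item_name: quantity} as submitted by the client. Only the
    item names and quantities are trusted enough to validate; prices
    always come from CATALOGUE, so a tampered request cannot lower them.
    """
    total = 0.0
    for item, quantity in cart.items():
        if item not in CATALOGUE:
            raise ValueError(f"unknown item: {item}")
        if quantity < 1:
            raise ValueError(f"invalid quantity for {item}")
        total += CATALOGUE[item] * quantity
    return round(total, 2)
```

A client side script might display a running total for responsiveness, but the server must recompute it before taking payment, which is why carts and payments are listed as server side tasks.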
