
A few tips for speedier browsing.

How often have you lost your temper while searching a forum thread for a way to fix the broken piece of electronics your best mate gifted you on your birthday? Or while trying to find a nice greeting card on Google Image search, clicking each thumbnail and exhausting yourself in a back-and-forth browsing exercise? Quite often, right?

If you browse the internet, you have experienced the annoyance that multi-page galleries, reviews, and sometimes even Google search results create. By multi-page content, I mean a single piece of information broken into multiple pages, mostly to serve advertisements the viewer hates.

This short article should help you speed up your browsing, provided your browser is Google Chrome or Mozilla Firefox (honestly, I haven't tried other browsers).

The Solution


  1. Fastest Chrome/Fastest Fox: automatically loads the next page whenever possible as you scroll to the bottom of the web page, saving you from locating and clicking the next button or page counter. Not only that, you get quick Wikipedia definitions simply by selecting a word or phrase, without having to open a new tab.


  2. Hover Zoom/Thumbnail Zoom: hover your mouse over any image on a webpage and it will most likely be magnified, that is, if the image is actually a low-resolution thumbnail that links to a larger original. This again saves you from opening the link to the original image.

I have been using these two extensions for years now and enjoying a noticeably faster browsing experience.



Please leave a comment, whether you liked it or not. Corrections are very much appreciated.

-
Owais


Popular posts from this blog

Executing MapReduce Applications on Hadoop (Single-node Cluster) - Part 1

Okay. You just set up Hadoop on a single node on a VM and are now wondering what comes next. Of course, you’ll run something on it, and what could be better than your own piece of code? But before we move to that, let’s first run an existing program to make sure things are set up well on our Hadoop cluster.
Power up your Ubuntu VM with Hadoop on it and, in a Terminal (Ctrl+Alt+T), run the following command:
$ start-all.sh
Provide the password whenever asked, and once all the jobs have started, execute the following command to make sure they are all running:
$ jps
Note: the “jps” utility is available only in Oracle JDK, not OpenJDK. See, there are reasons it was recommended in the first place.
You should be able to see the following services: NameNode, SecondaryNameNode, DataNode, JobTracker, TaskTracker, and Jps.


We'll take a minute to very briefly define these services first.
NameNode: a component of HDFS (Hadoop Distributed File System) that manages all the file system metadata: links, trees, directory structure, etc…
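
The preview is cut off here. For a rough sense of the "own piece of code" the full series builds toward, a minimal WordCount-style mapper written against the standard org.apache.hadoop.mapreduce API could look like the sketch below; the class name and whitespace tokenization are illustrative, not taken from the post:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
   private static final IntWritable ONE = new IntWritable (1);
   private final Text word = new Text ();

   // Emit (token, 1) for every whitespace-separated token in the input line;
   // the framework groups these by token for a reducer to sum.
   @Override
   protected void map (LongWritable key, Text value, Context context)
         throws IOException, InterruptedException {
      StringTokenizer tokens = new StringTokenizer (value.toString ());
      while (tokens.hasMoreTokens ()) {
         word.set (tokens.nextToken ());
         context.write (word, ONE);
      }
   }
}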

A faster, Non-recursive Algorithm to compute all Combinations of a String

Imagine you're me: you studied Permutations and Combinations in your high school maths, and after all these years you find that, to solve a certain problem, you need to apply Combinations.

You do your revision and confidently open your favourite IDE to code; after typing the usual few lines, you pause and think, then do the next best thing: search the Internet. You find a nice recursive solution which does the job well, like the following:

import java.util.ArrayList;
import java.util.Date;

public class Combination {
   public ArrayList<ArrayList<String>> compute (ArrayList<String> restOfVals) {
      if (restOfVals.size () < 2) {
         ArrayList<ArrayList<String>> c = new ArrayList<ArrayList<String>> ();
         c.add (restOfVals);
         return c;
      }
      else {
         ArrayList<ArrayList<String>> newList = new ArrayList<ArrayList<String>> ();
         for (String o : restOfVals) {
            A…
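
The preview cuts off here. As a self-contained illustration of one common non-recursive approach, a bitmask enumeration over the string's characters (not necessarily the method the full post ends up using; the class and method names below are just illustrative), a rough sketch could look like this:

import java.util.ArrayList;
import java.util.List;

public class IterativeCombinations {
   // Treat each bit of a counter as an include/exclude flag for the character
   // at that position, so every non-zero mask yields one non-empty combination.
   public static List<String> compute (String input) {
      List<String> result = new ArrayList<String> ();
      int n = input.length ();   // assumes n is small enough for an int bitmask (n < 31)
      for (int mask = 1; mask < (1 << n); mask++) {
         StringBuilder combo = new StringBuilder ();
         for (int i = 0; i < n; i++) {
            if ((mask & (1 << i)) != 0) {
               combo.append (input.charAt (i));
            }
         }
         result.add (combo.toString ());
      }
      return result;
   }

   public static void main (String[] args) {
      System.out.println (compute ("abc"));   // prints [a, b, ab, c, ac, bc, abc]
   }
}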

Titanic: A case study for predictive analysis on R (Part 4)

Working with the Titanic data set from Kaggle.com's competition, we predicted passenger survival with 79.426% accuracy in our previous attempt. This time, we will try to learn the missing values instead of simply plugging in the mean or median. Let's start with Age.

Looking at the available data, we can hypothetically correlate Age with attributes like Title, Sex, Fare and HasCabin. Also note that we previously created the variable AgePredicted; we will use it here to identify which records were filled in previously.

> age_train <- dataset[dataset$AgePredicted == 0, c("Age","Title","Sex","Fare","HasCabin")]
> age_test <- dataset[dataset$AgePredicted == 1, c("Title","Sex","Fare","HasCabin")]
> formula <- Age ~ Title + Sex + Fare + HasCabin
> rp_fit <- rpart(formula, data=age_train, method="class")
> PredAge <- predict(rp_fit, newdata=age_test, type="vector")
…