Friday, March 15, 2013

Local DNS caching in Linux


Linux Local DNS caching using dnsmasq

Source: http://www.webupd8.org/2009/12/faster-browsing-in-linux-with-local-dns.html


FASTER BROWSING IN LINUX WITH LOCAL DNS CACHE


A local DNS cache speeds up browsing because repeated DNS requests are answered from the cache instead of being sent out each time. Your internet connection will not get any faster, but browsing will feel faster: a typical website triggers quite a few DNS requests, and the local cache brings their query time to almost 0. You can find more info about DNS on Wikipedia.

To see how fast your current domain name servers (DNS) are, open a terminal and paste this:
dig yahoo.com

(Or dig google.com or whatever domain)

You should see something like this:
; <<>> DiG 9.6.1-P1 <<>> yahoo.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 42045
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;yahoo.com.   IN A

;; ANSWER SECTION:
yahoo.com.  20142 IN A 69.147.114.224
yahoo.com.  20142 IN A 209.131.36.159
yahoo.com.  20142 IN A 209.191.93.53

;; Query time: 50 msec
;; SERVER: 208.67.220.220#53(208.67.220.220)
;; WHEN: Wed Dec  9 13:21:48 2009
;; MSG SIZE  rcvd: 75

Notice the "Query time" in bold. It's usually somewhere near 50 msec. (it depends on your domain name servers).

Run this one more time. If the query time drops below 5 msec, your internet service provider's DNS servers already do some caching for you and you do not need to follow this how-to. If the response time stays roughly the same and you are on a broadband connection, you can use this guide to cache DNS responses locally for faster browsing.

Firstly, I would like to thank embraceubuntu for this how-to; I've just made it more newbie-friendly, so the credit goes to him.

Before we get started, please note that there is an easier method (install resolvconf and bind9 with sudo apt-get install resolvconf bind9, then edit /etc/bind/named.conf.options to point at your ISP DNS). However, in my tests with resolvconf and bind9 the first DNS query took 200-300 msec (maybe it needs some tweaking, but I couldn't figure out why the first query was so slow), dropping to 0 only once cached. The method explained below gives an initial query time equal to your default DNS (~50 msec for me, as opposed to the 200-300 msec I got with resolvconf and bind9).

Let's get started!

Manually configuring the local DNS cache



1. Install dnsmasq:
sudo apt-get install dnsmasq


2. Configure dnsmasq.conf

Press Alt + F2 and type:

gksu gedit /etc/dnsmasq.conf


Now search for "listen-address" (it's on line 90 on my Ubuntu Karmic installation), remove the "#" character in front of "listen-address" and add "127.0.0.1" after the "=" (all without the quotes). Basically, this is how the "listen-address" line should look like after editing it:
listen-address=127.0.0.1


(Optional) You can also change the cache size if you want. Search for "#cache-size=150" in the same file (it's on line 432 on my Ubuntu Karmic installation), remove the "#" character in front of the line (this uncomments it) and replace "150" with the size you want for your DNS cache. This is how the line should look after editing:
cache-size=500

Obviously, "500" can be any number you want.

Don't forget to save the changes!
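Before moving on, you can have dnsmasq check the edited file for syntax errors (the --test flag only validates the configuration, it doesn't start the daemon):

sudo dnsmasq --test

If everything is fine, it should print something like "dnsmasq: syntax check OK".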

3. Edit dhclient.conf

Press Alt + F2 and type:
gksu gedit /etc/dhcp3/dhclient.conf

For newer Ubuntu versions (tested on Oneiric), dhclient.conf has moved, so use the following command instead:
gksu gedit /etc/dhcp/dhclient.conf


Then modify the "prepend domain-name-servers" line (it's on line 20 on my computer) to look like this:
prepend domain-name-servers 127.0.0.1;



4. Edit resolv.conf

Press Alt + F2 and paste this:
gksu gedit /etc/resolv.conf


Initially, this is how the resolv.conf file looks:
nameserver ISP_DNS1
nameserver ISP_DNS2

Where ISP_DNS1 and ISP_DNS2 are your ISP's domain name servers (or 8.8.4.4, etc., if you are using Google DNS, and so on).

Put this as the first line in your resolv.conf file:
nameserver 127.0.0.1

Which means this is how your resolv.conf file will look:
nameserver 127.0.0.1
nameserver ISP_DNS1
nameserver ISP_DNS2

Again, ISP_DNS1 and ISP_DNS2 are your ISP's domain name servers.

As an example, this is how my resolv.conf file looks (using the local DNS cache, a Google DNS server and an OpenDNS server):
nameserver 127.0.0.1
nameserver 8.8.4.4
nameserver 208.67.220.220


4.1 If you are using a DSL (PPP) connection, you need to make sure the ppp client does not overwrite your /etc/resolv.conf file. To do this, press Alt + F2 and paste this:
gksu gedit /etc/ppp/peers/provider

Search for "usepeerdns" and replace it with "#usepeerdns" (we used "#" to comment that line so it's ignored).

5. Restart your networking and dnsmasq:

-Networking:
sudo /etc/init.d/networking restart


-DNS:
sudo /etc/init.d/dnsmasq restart

Please note that you can run this last command at any time to restart your DNS cache (flush DNS, clear the cache, call it whatever you want) without restarting the computer.
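To confirm that dnsmasq itself is the one answering, you can point a query directly at it using dig's @server syntax:

dig @127.0.0.1 yahoo.com

The "SERVER" line at the bottom of the output should show 127.0.0.1#53(127.0.0.1).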

6. Testing

To see the performance improvement, open a terminal and type:
dig yahoo.com

The first time, it should be about the same as at the beginning of the post (~50 msec for me). Now type it again! You should see something like this:
dig yahoo.com

; <<>> DiG 9.6.1-P2 <<>> yahoo.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 57501
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;yahoo.com.   IN A

;; ANSWER SECTION:
yahoo.com.  20982 IN A 209.131.36.159
yahoo.com.  20982 IN A 69.147.114.224
yahoo.com.  20982 IN A 209.191.93.53

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Wed Dec  9 14:43:41 2009
;; MSG SIZE  rcvd: 75


A 0 msec query time, because the domains are now cached.


Note: Using the method above, the DNS cache will be cleared once you reboot your computer. For persistent DNS caching (on the hard disk), see this excellent how-to on Ubuntu Forums.

Saturday, March 9, 2013

Smartphones with full HD 1080p resolution (2013)


Sony Xperia Z

  • 5.0 inch TFT capacitive touchscreen
  • Full HD resolution of 1080 x 1920 pixels
  • Shatterproof, scratch-resistant glass; Sony Mobile BRAVIA Engine 2


Sony Xperia ZL

  • 5.0 inch TFT capacitive touchscreen
  • Resolution of 1080 x 1920 pixels
  • Shatterproof, scratch-resistant glass; Sony Mobile BRAVIA Engine 2


ZTE Grand S

  • 5 inch full-HD screen
  • Resolution of 1920 x 1080 pixels


Lenovo K900

  • 5.0 inch IPS LCD touchscreen
  • Resolution of 1920 x 1080 pixels


Huawei Ascend D2

  • 5.0 inch IPS LCD capacitive touchscreen
  • Resolution of 1080 x 1920 pixels
  • Corning Gorilla Glass protection



HTC Droid DNA

  • 5.0 inch Super LCD3 capacitive touchscreen
  • Resolution of 1080 x 1920 pixels
  • Corning Gorilla Glass 2 protection



Samsung Galaxy S4

  • 5.0 inch Super AMOLED capacitive touchscreen
  • Resolution of 1080 x 1920 pixels
  • Corning Gorilla Glass 2 protection



HTC Butterfly

  • 5.0 inch Super LCD3 capacitive touchscreen
  • Resolution of 1080 x 1920 pixels
  • Corning Gorilla Glass 2 protection

Sunday, March 3, 2013

International shipping to India





Services



1. iShopInternational.com


Description source: http://forum.xda-developers.com/showpost.php?p=36983040&postcount=2

Try ishopinternational.com.
You provide them with an Amazon link and they give you a quoted price that includes the product cost from Amazon, shipping to India and customs (<10%). You pay into their Indian account and they purchase from Amazon and deliver to your doorstep. You can use coupon code amazon200 or amazon250 to get a Rs. 250 discount.


2. www.ppobox.com

Friday, March 1, 2013

Android Development resources



References -

  1. http://mobileorchard.com/android-app-developmentthreading-part-1-handlers/
  2. http://mobileorchard.com/android-app-developmentthreading-part-2-async-tasks/

My AVD command line

"-scale 0.6 -qemu -m 512 -enable-kvm"

Wednesday, February 27, 2013

Android emulator taking up too much screen space?


Source: http://stackoverflow.com/questions/2359895/android-emulator-screen-too-tall

(Slightly modified)

Using AVD Manager

  1. Open the AVD Manager
    1. If using Eclipse, go to Window -> Android SDK and AVD Manager -> Virtual Devices
  2. Select the AVD you want to launch and click Start
  3. Check the "Scale display to real size" checkbox
  4. Enter how big you want it to appear in inches and press Launch. For this to work, you'll also need to enter a reasonable approximation of your screen's resolution. I'm using 7 inches and 113 dpi for my 13" MacBook Pro, but you may be able to get away with 8 or 9 inches.

While debugging (add this to command line)

Source: http://stackoverflow.com/questions/2359895/android-emulator-screen-too-tall/4963984#4963984

This is actually possible from your project as well; there is no need to start the emulator through the AVD Manager:
  1. Go to Run > Run Configurations... > (select your application on the left hand side) > (click the "Target" tab on the right hand side).
  2. At the bottom you'll see 'Emulator launch parameters'. In the 'additional emulator command line options' field, add '-scale 0.75' (to make the screen 75% of full size).
Next time you start the emulator it will be scaled properly, hooray!

Changing the scale while the emulator is running

Source: http://stackoverflow.com/a/6049246


There is also a way to resize the emulator through a Windows command prompt, assuming there is one emulator running on port 5554:
  1. From a command prompt, run: telnet localhost 5554
  2. window scale 0.75
  3. quit

Sunday, February 17, 2013

How to Do a Clean Install of Windows 8 with an Upgrade Disc


Sometimes, you just need to do a clean install. Unfortunately, the Windows 8 Upgrade doesn't always allow for that, throwing you an error when you try to activate after a clean install. Reader uncommoner shows us a workaround for this issue.
If you do a clean install using the Windows 8 Upgrade Assistant, you should be fine—but if you've already formatted your drive or you're moving to a new drive, you can't do a "clean install" without installing an old version of Windows first. It'll let you install Windows 8 cleanly, but when you go to activate, you get an error 0x8007007B, saying your product key can only be used for upgrading.
If you get that error, here's how to fix it:
  1. Press the Windows key and type regedit. Press Enter to open the Registry Editor.
  2. Navigate to HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Setup\OOBE\ and double-click the MediaBootInstall value in the right pane.
  3. Change its value from 1 to 0.
  4. Exit the Registry Editor, press the Windows key again, and type cmd. Right-click on the Command Prompt icon and run it as an administrator.
  5. Type slmgr /rearm and press Enter.
  6. Reboot Windows.
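If you prefer the command line, the registry change (steps 1-3) and the rearm (step 5) can also be done together from a single elevated Command Prompt. This is a sketch assuming the same registry path and value name as above:

reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Setup\OOBE" /v MediaBootInstall /t REG_DWORD /d 0 /f
slmgr /rearm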
When you get back into Windows, you should be able to run the Activation utility and activate Windows as normal, without getting an error. Obviously, you could use this trick for evil, but it has its legitimate place too—if, say, you're upgrading your hard drive and want to do a fresh install on it, or if you formatted your drive before upgrading.
We haven't had a chance to test it ourselves, but it's been well documented around the net, so we're confident it should work for you if you're getting this particular error. If you give it a shot, let us know how it works for you in the discussions below! Thanks for the tip, uncommoner!



Saturday, January 26, 2013

Cannot delete files - You'll need to provide administrator permission to delete this folder



Source: http://forum.thewindowsclub.com/windows-tips-tutorials-articles/18379-how-take-ownership-full-control-permissions-files-folders-windows.html


How to Take Ownership and Full Control Permissions of Files & Folders in Windows


Many files and folders in Windows 7 & Vista do not actually belong to users. Rather, most system files have "TrustedInstaller" as their owner, and assign or grant read/write, traverse or full control permissions only to the SYSTEM or CREATOR OWNER accounts. So users must take ownership and grant full control permissions to themselves if they want to modify, rename or delete these files or folders. Sometimes users may also need to take ownership and grant themselves full rights on another drive or partition, especially on a newly installed or inserted disk, if they cannot browse its contents.

To take ownership and grant full control (or read/write) permissions on files or folders in Windows Vista, follow these steps.

1. In a Windows Explorer window, locate the file or folder that you want to take ownership of and whose permissions you want to change.
2. Right-click on the file or directory, then select Properties from the context menu.



3. Click on the Security tab.
4. Click on the Advanced button at the bottom.



5. In the "Advanced Security Settings" dialog window, click on the Owner tab.
6. Here you will see the current owner (i.e. TrustedInstaller). To take ownership of the object, click on the Edit button. If UAC prompts for an administrator's password or permission to continue, enter the password or press Continue.



7. An additional "Advanced Security Settings" dialog will appear. In the "Change owner to" box, highlight the user name (for example, Administrators) that you want to assign as the owner of the object. Click OK to apply the change.



8. Back in the original parent-level "Advanced Security Settings" window, you will see that the owner of the file or folder has changed to the user you just selected.
9. Click OK button to exit this window.
10. Click OK again to exit completely from the Properties window.
11. Ownership now belongs to the user or account you selected. To assign the necessary permissions to that user as well, repeat steps 1 to 3 to open the object's Properties window again.
12. In the object's Properties window, click on the Edit button to change permissions. If UAC prompts for an administrator's password or permission to continue, enter the password or press Continue.



13. In the "Group or user names" box, highlight Administrators or the user whose permissions on the object you want to change.

If the user ID or group you want doesn't appear in the list, click the Add button, type the desired user or group name into the "Enter object names to select" box, and click OK.
14. In the Permissions for Administrators box below (or for whichever user or group you chose), tick "Full Control" under the "Allow" column to grant full access rights to the Administrators group.



15. Click “OK” twice when done.

You can now do whatever you like with the files or directories processed as above. If you feel the process above is a little too long and prefer the command line, open an elevated command prompt as administrator and issue the following commands:

For Files:

takeown /f file_name /d y
icacls file_name /grant administrators:F


For Folders or Directories (will perform action recursively):

takeown /f directory_name /r /d y
icacls directory_name /grant administrators:F /t


Replace file_name or directory_name with the actual file or folder name, including the path when applicable. The first command takes ownership of the specified file or folder, and the second grants full control permissions to the Administrators group. Note that with the folder variants the commands run recursively; to prevent that, remove the "/r" and "/t" switches.

The two commands above can easily be wrapped in a Windows batch script that takes ownership and grants full control permissions to the Administrators group with a single command. Alternatively, you can add a "Take Control Of" option to the right-click menu so that the next time you need to take control of a file, it's a one-click task.
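As a minimal sketch of such a batch script (the file name takeown_full.cmd is just an example; run it from an elevated prompt with a folder path as the argument):

@echo off
rem Usage: takeown_full.cmd <folder> -- acts recursively, like the folder commands above
takeown /f "%~1" /r /d y
icacls "%~1" /grant administrators:F /t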

Monday, January 21, 2013

HDInsight mapreduce - Hadoop API for .NET


Taken from: http://hadoopsdk.codeplex.com/wikipage?title=Getting%20Started%20With%20Map%20Reduce&referringTitle=Map%2fReduce



Hadoop API for .NET
=====================

Introduction

----------
Hadoop Streaming is a facility for writing map-reduce jobs in the language of your choice. Hadoop API for .NET is a wrapper over Streaming that provides a convenient experience for .NET developers. An understanding of the concepts and general functionality provided by Hadoop Streaming is necessary to use this API successfully: see http://hadoop.apache.org/common/docs/r0.20.0/streaming.html for this background information.

The main facilities provided by this API are:

1. Abstraction of job execution to avoid manual construction of streaming command-line.

2. Mapper, Reducer, Combiner base classes and runtime wrappers that provide helpful abstractions. For example, the ReducerCombinerBase class provides input through (string key, IEnumerable<string> value) groups.

3. Detection of .NET dependencies and automatic inclusion in streaming job.

4. Local unit-testing support for map/combine/reduce classes via the StreamingUnit class.

5. Support for JSON I/O and strongly typed mapper/combiner/reducer via Json* classes. The pattern used by the JSON classes can be used to create other serialization wrappers.

When jobs are submitted via the API a Hadoop Streaming command is generated and executed. The command is displayed on the console and can be used for direct invocation if required.

Input & Output formats

--------------------

The input/output format supported is line-oriented, tab-separated records, staged in a Hadoop-supported file system such as HDFS or Azure Blob Storage. The input may comprise many files, but each should have a consistent format: records delimited by \r\n, columns delimited by \t.

When a job comprises both a mapper and a reducer, the key values emitted by the mapper must be plain text that sorts correctly under an ordinal text comparer such as .NET's `StringComparison.Ordinal`.
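One practical consequence: ordinal order is raw character order, not numeric order, so the key "10" sorts before "9". If your mapper emits numeric keys, zero-padding makes ordinal order match numeric order, for example:

    int value = 42;
    // "D10" zero-pads to ten digits: "0000000042" now sorts ordinally in numeric order.
    string key = value.ToString("D10");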

In all other cases the record fields may comprise formatted text such as JSON or another text representation of structured data. The API includes support for JSON fields via the classes in the `Microsoft.Hadoop.MapReduce.Json` namespace.

If data is in a binary format or document-oriented format (such as a folder full of .docx files), the input to a map-reduce job will typically be files that list the path to each real file, one path per line. The mapper can then look up the files using whatever API is appropriate.


Example Map-Reduce program

------------------------

A .NET map-reduce 'program' comprises a number of parts:

1. A job definition. This declares the `MapperType`, `ReducerType`, `CombinerType` and configuration settings

2. Mapper, Reducer and Combiner classes

3. Input data. Typically staged to HDFS or Azure Storage prior to job execution. The most common approach is via the Hadoop file-system utility. For example,

> hadoop fs -copyFromLocal localFile input/folder/file 

4. A job-executor. This can either be the `MRRunner.exe` that is part of the API distribution, or a Main() function in your .NET application that invokes `HadoopJobExecutor`.

To create a mapper-only job:

1. Create a new C# project and reference `Microsoft.Hadoop.MapReduce.DLL`.

2. Create a class that implements `HadoopJob<FirstMapper>`.

3. Create a class called `FirstMapper` that implements `MapperBase`.

For example, the following is a complete map-reduce 'program' that consumes files containing integers and produces output that includes `sqrt(x)` for each input value.

    public class FirstJob : HadoopJob<SqrtMapper>
    {
        public override HadoopJobConfiguration Configure(ExecutorContext context)
        {
            HadoopJobConfiguration config = new HadoopJobConfiguration();
            config.InputPath = "input/SqrtJob";
            config.OutputFolder = "output/SqrtJob";
            return config;
        }
    }

    public class SqrtMapper : MapperBase
    {
        public override void Map(string inputLine, MapperContext context)
        {
            int inputValue = int.Parse(inputLine);

            // Perform the work.
            double sqrt = Math.Sqrt((double)inputValue);

            // Write output data.
            context.EmitKeyValue(inputValue.ToString(), sqrt.ToString());
        }
    }

To run this program, stage some data in HDFS:

1. Create a text file called input.txt that has one integer per line.

2. Import that text file into HDFS via

> hadoop fs -copyFromLocal input.txt input/SqrtJob/input.txt

3. Compile your .NET code to a DLL called FirstJob.dll and run it via

> MRRunner -dll FirstJob.dll 

When this runs, the console will display the complete Hadoop Streaming command issued and then the normal console output from the hadoop streaming command itself.

To see detailed information about the execution of current and past jobs, use the Hadoop streaming web front-end, typically accessible at http://localhost:50030.

To explore the HDFS filesystem, use the HDFS web front-end that is typically accessible at http://localhost:50080.

When the job completes, output will be available in HDFS at `/user/user/output/SqrtJob`.

HadoopJob class

-------------

The `HadoopJob<>` class defines the user's components that make up a map-reduce program. Generic parameters declare the Mapper, Combiner and Reducer classes that the job should use. Configuration parameters are supplied by overriding the method

public abstract HadoopJobConfiguration Configure(ExecutorContext context);

To implement this method, instantiate a `HadoopJobConfiguration` object, set its members, and return it.


HadoopJobConfiguration class

--------------------------

The configuration for a job is a strongly typed bag of settings that are largely passed directly to the hadoop streaming command line. A few settings translate into non-trivial arguments, but most are straightforward. Only a subset of Hadoop Streaming settings is directly exposed through the configuration object; all other settings are usable through the catch-all facilities `config.AdditionalStreamingArguments` and `config.AdditionalGenericArguments`.
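For example, a Configure override might combine the directly exposed members with the catch-all list. This is only a sketch: the two path properties appear in the FirstJob example above, but treating AdditionalStreamingArguments as a string collection (and the particular streaming option passed) is an assumption:

    public override HadoopJobConfiguration Configure(ExecutorContext context)
    {
        HadoopJobConfiguration config = new HadoopJobConfiguration();
        config.InputPath = "input/MyJob";
        config.OutputFolder = "output/MyJob";
        // Assumed: forward a raw streaming option that has no dedicated property.
        config.AdditionalStreamingArguments.Add("-numReduceTasks 4");
        return config;
    }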

HadoopJobExecutor class

---------------------

HadoopJobExecutor handles the creation and execution of a complete Hadoop Streaming command line. It can be invoked in several ways. The first is to use the MRRunner.exe utility, which invokes HadoopJobExecutor on your behalf:

> MRRunner -dll MyMRProgram.dll {-class jobClass} {-- job-class options} 

The second is to invoke the executor directly and ask it to execute a HadoopJob:

HadoopJobExecutor.Execute<JobType>(arguments)
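As a concrete sketch of this form, the FirstJob class defined earlier could be run in-process like this (the empty array stands in for job options; the exact overloads are not documented on this page, so treat this as an assumption):

    HadoopJobExecutor.Execute<FirstJob>(new string[0]);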

A third approach is to skip the job type and invoke the executor directly with the MapperType (and so on) plus a configuration object:

HadoopJobExecutor.Execute<TMapper,..>(configuration) 

MRRunner

------
MRRunner is a command-line utility for executing a map-reduce program written against the Hadoop API for .NET. To get started, you should have an assembly (a .NET DLL or EXE) that defines at least one implementation of HadoopJob<>.

If MyDll contains only one implementation of HadoopJob<>, you can run the job with

> MRRunner -dll MyDll 

If MyDll contains multiple implementations of HadoopJob<>, indicate the one you wish to run

> MRRunner -dll MyDll -class MyClass 

To supply options to your job, pass them as trailing arguments on the command-line, after a double-hyphen

> MRRunner -dll MyDll -class MyClass -- extraArg1 extraArg2 

These additional arguments are provided to your job via a context object that is available to all methods on HadoopJob<>.
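As a sketch of how a job might consume those trailing options inside Configure: the Arguments member used below is an assumption (check the API for the actual member name on ExecutorContext):

    public override HadoopJobConfiguration Configure(ExecutorContext context)
    {
        HadoopJobConfiguration config = new HadoopJobConfiguration();
        // Assumed member: context.Arguments would hold "extraArg1", "extraArg2".
        config.InputPath = "input/" + context.Arguments[0];
        config.OutputFolder = "output/MyJob";
        return config;
    }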

MapperBase

--------

A MapperBase implementation describes how to perform the Map function. The input to Map is a subset of the rows of the input. On each call to the Mapper.Map(string input, MapperContext context) method, a single line is provided as input. The Map method can use the context object to look up relevant settings, emit output lines, and emit log messages and counter updates.

For example:
    public class MyMapper : MapperBase {
        public override void Map(string inputLine, MapperContext context)
        {
            context.Log("mapper called. input=" + inputLine);
            context.IncrementCounter("mapInputs");
            // Emit the line keyed by its first tab-separated field.
            string[] parts = inputLine.Split('\t');
            context.EmitKeyValue(parts[0], inputLine);
        }
    }
   

The MapperBase class also provides overridable methods that run at the start and end of each batch. These can be used for setup and teardown, such as initialising a component.


ReducerCombinerBase

-----------------

A ReducerCombinerBase implementation describes how to perform a reduce and/or combine operation. In each case the operation takes a group and emits key/value pairs that typically represent an aggregated form of the group. For example, the input to a reducer may be key = 'a', values = 1,2,3, and the output might be {'a', 6}. To implement `ReducerCombinerBase`, override the `Reduce(key, values, context)` method and use the context object to emit key/value pairs as necessary.
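As a minimal sketch, here is a reducer that sums integer values per key, matching the {'a', 6} example above. The parameter types are inferred from the description, so treat the exact signature as an assumption:

    public class SumReducer : ReducerCombinerBase
    {
        public override void Reduce(string key, IEnumerable<string> values, ReducerCombinerContext context)
        {
            // Aggregate the group: key='a', values=1,2,3 -> emits {'a', 6}.
            int sum = 0;
            foreach (string v in values)
            {
                sum += int.Parse(v);
            }
            context.EmitKeyValue(key, sum.ToString());
        }
    }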

A common requirement for a map-reduce program is to reuse one reduce function as both the reducer and the combiner. This is achieved by referencing the same reducer class twice when declaring the HadoopJob class.
    public class MyJob : HadoopJob<MyMapper, MyReducer, MyReducer> {
        ...
    }


Json support

----------

The primary data format for Hadoop Streaming is line-oriented text, so the normal currency of Map and Reduce implementations is System.String. It is often convenient to transform the strings to/from .NET objects, which requires a serialization mechanism. A set of classes that use Json.NET as the serialization engine is provided in the Microsoft.Hadoop.MapReduce.Json namespace. As an example of their use, consider input data comprising JSON values:
    {"ID":2, "Name":"Alan"}
    {"ID":3, "Name":"Bob"}

Further, assume that a class definition that can represent these values is
    public class Employee {
        public int ID {get;set;}
        public string Name {get;set;}
    }
   

The JSON mapper classes help perform the deserialization and transformation to Employee instances required for convenient processing. Let's assume the output of the mapper will be simple strings; in this case the appropriate mapper type is JsonInMapperBase<>. For example:
    public class MyMapper : JsonInMapperBase<Employee> {
        public override void Map(Employee value, MapperContext context){
            // Work with the deserialized object directly; for instance
            // (a hypothetical output format) emit the name keyed by ID:
            context.EmitKeyValue(value.ID.ToString(), value.Name);
        }
    }
   

JsonInMapperBase performs the deserialization of the input lines and the instantiation of Employee objects. The Map method you implement deals with Employee inputs rather than raw strings.

Other classes in `Microsoft.Hadoop.MapReduce.Json` support transferring object representations between the mapper and the reducer, and as the output of the reducer.