Q1. What is the Hadoop framework?
The Hadoop framework provides a facility to store very large amounts of data with almost no downtime while querying. It breaks each file into pieces, replicates them (three copies by default) and stores them on different machines, so the data remains accessible even if a machine breaks down or drops out of the network.
One can use MapReduce programs to access and manipulate the data. Developers need not worry about where the data is stored; they can reference it through a single view provided by the Master Node, which stores the metadata for all the files stored across the cluster.
Q2. On what concept does the Hadoop framework work?
It works based on MapReduce, which splits the problem into small chunks that are solved individually, with the partial results consolidated at the end.
Q3. What is HDFS?
HDFS, the Hadoop Distributed File System, is a distributed file system designed to hold very large amounts of data (terabytes or even petabytes) and provide high-throughput access to this information. Files are stored in a redundant fashion across multiple machines to ensure their durability in the face of failures and their high availability to highly parallel applications.
HDFS is a block-structured file system:
individual files are broken into blocks of a fixed size. These blocks are
stored across a cluster of one or more machines with data storage capacity.
Individual machines in the cluster are referred to as DataNodes.
Q4. What is MAP REDUCE?
MapReduce is a programming model for processing large data sets, and the name of an implementation of the model by Google. MapReduce is typically used to do distributed computing on clusters of computers. The model is inspired by the map and reduce functions commonly used in functional programming, although their purpose in the MapReduce framework is not the same as their original forms. MapReduce libraries have been written in many programming languages. A popular free implementation is Apache Hadoop.
Overview
MapReduce is a framework for processing
embarrassingly parallel problems across huge datasets using a large number of
computers (nodes), collectively referred to as a cluster (if all nodes are on
the same local network and use similar hardware) or a grid (if the nodes are
shared across geographically and administratively distributed systems, and use
more heterogeneous hardware). Computational processing can occur on data stored
either in a filesystem (unstructured) or in a database (structured). MapReduce
can take advantage of locality of data, processing data on or near the storage
assets to decrease transmission of data.
"Map"
step: The master node takes the input, divides it
into smaller sub-problems, and distributes them to worker nodes. A worker node
may do this again in turn, leading to a multi-level tree structure. The worker
node processes the smaller problem, and passes the answer back to its master
node.
"Reduce"
step: The master node then collects the answers to
all the sub-problems and combines them in some way to form the output – the
answer to the problem it was originally trying to solve.
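To make the "Map" and "Reduce" steps concrete, here is a minimal word-count sketch using the newer org.apache.hadoop.mapreduce API. The class names, the whitespace tokenization, and the reuse of the reducer as a combiner are illustrative choices, not something stated above.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // "Map" step: each mapper turns its input split into (word, 1) pairs.
  public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      for (String token : value.toString().split("\\s+")) {
        if (!token.isEmpty()) {
          word.set(token);
          context.write(word, ONE);
        }
      }
    }
  }

  // "Reduce" step: all counts for one word are combined into a single total.
  public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenMapper.class);
    job.setCombinerClass(SumReducer.class); // local pre-aggregation on the map side
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}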
Q7. What is the
InputFormat?
The InputFormat defines how to read data from a file into the Mapper instances.
Hadoop comes with several implementations of InputFormat; some work with text
files and describe different ways in which the text files can be interpreted.
Others, like SequenceFileInputFormat, are purpose-built for reading particular
binary file formats.
More
powerfully, you can define your own InputFormat implementations to format the
input to your programs however you want. For example, the default
TextInputFormat reads lines of text files. The key it emits for each record is
the byte offset of the line read (as a LongWritable), and the value is the
contents of the line up to the terminating '\n' character (as a Text
object). If you have multi-line records each separated by a $ character, you
could write your own InputFormat that parses files into records split on this
character instead.
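As a hedged illustration of the '$'-separated-records case: newer Hadoop releases let you change the record delimiter used by TextInputFormat's LineRecordReader through the textinputformat.record.delimiter property, which avoids writing a full custom InputFormat; if that property is not available in your version, a custom InputFormat/RecordReader pair is the way to go. The driver class name below is made up.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class DollarDelimitedDriver {
  public static Job configure() throws Exception {
    Configuration conf = new Configuration();
    // Treat '$' instead of '\n' as the record separator, so multi-line
    // records separated by '$' arrive in the Mapper as single values.
    conf.set("textinputformat.record.delimiter", "$");
    Job job = Job.getInstance(conf, "dollar-delimited input");
    job.setInputFormatClass(TextInputFormat.class);
    return job;
  }
}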
Q8. What is the typical
block size of an HDFS block?
The default block size is 64 MB, but 128 MB is the typical choice in practice.
Q9. What is a Name Node?
The Name Node holds all
the file system metadata for the cluster and oversees the health of Data Nodes
and coordinates access to data. The Name
Node is the central controller of HDFS.
It does not hold any cluster data itself. The Name Node only knows what blocks make up
a file and where those blocks are located in the cluster. The Name Node points Clients to the Data
Nodes they need to talk to and keeps track of the cluster’s storage capacity,
the health of each Data Node, and making sure each block of data is meeting the
minimum defined replica policy.
Data Nodes send
heartbeats to the Name Node every 3 seconds via a TCP handshake, using the same
port number defined for the Name Node daemon, usually TCP 9000. Every tenth heartbeat is a Block Report,
where the Data Node tells the Name Node about all the blocks it has. The block reports allow the Name Node to build its metadata and ensure that three copies of each block exist on different nodes, in different racks.
The Name Node is a
critical component of the Hadoop Distributed File System (HDFS). Without it, Clients would not be able to
write or read files from HDFS, and it would be impossible to schedule and
execute Map Reduce jobs. Because of
this, it’s a good idea to equip the Name Node with a highly redundant
enterprise-class server configuration: dual power supplies, hot-swappable fans, redundant NIC connections, etc.
Q10. What is a Secondary
Name Node?
Hadoop has a server role
called the Secondary Name Node. A common
misconception is that this role provides a high availability backup for the
Name Node. This is not the case.
The Secondary Name Node
occasionally connects to the Name Node (by default, every hour) and grabs a copy
of the Name Node’s in-memory metadata and files used to store metadata (both of
which may be out of sync). The Secondary
Name Node combines this information in a fresh set of files and delivers them
back to the Name Node, while keeping a copy for itself.
Should the Name Node
die, the files retained by the Secondary Name Node can be used to recover the
Name Node. In a busy cluster, the
administrator may configure the Secondary Name Node to provide this
housekeeping service much more frequently than the default setting of one hour, perhaps as often as every minute.
Q11. What is a Data Node?
A Data Node is where the actual data resides in the Hadoop HDFS system. The corresponding metadata, recording which block is on which node, is maintained at the Name Node.
There are some cases in
which a Data Node daemon itself will need to read a block of data from
HDFS. One such case is where the Data
Node has been asked to process data that it does not have locally, and
therefore it must retrieve the data from another Data Node over the network
before it can begin processing.
This is another key
example of the Name Node’s Rack Awareness knowledge providing optimal network behaviour. When the Data Node asks the Name Node for
location of block data, the Name Node will check if another Data Node in the
same rack has the data. If so, the Name
Node provides the in-rack location from which to retrieve the data. The flow does not need to traverse two more switches and congested links to find the data in another rack. With the data retrieved more quickly in-rack, the
data processing can begin sooner, and the job completes that much faster.
Q12. What are the Hadoop Server Roles?
The three major categories of machine roles in a Hadoop deployment are Client machines, Master nodes, and Slave nodes. The Master nodes oversee the two key functional pieces that make up Hadoop: storing lots of data (HDFS), and running parallel computations on all that data (Map Reduce). The Name Node oversees and coordinates the data storage function (HDFS), while the Job Tracker oversees and coordinates the parallel processing of data using Map Reduce. Slave Nodes make up the vast majority of machines and do all the dirty work of storing the data and running the computations. Each slave runs both a Data Node and a Task Tracker daemon that communicate with and receive instructions from their master nodes. The Task Tracker daemon is a slave to the Job Tracker, and the Data Node daemon is a slave to the Name Node.
Client machines have Hadoop installed with all the cluster settings, but are neither a Master nor a Slave. Instead, the role of the Client machine is to load data into the cluster, submit Map Reduce jobs describing how that data should be processed, and then retrieve or view the results of the job when it's finished. In smaller clusters (~40 nodes) you may have a single physical server playing multiple roles, such as both Job Tracker and Name Node. In medium to large clusters, each role typically runs on its own server machine.
In real production clusters there is no server virtualization, no hypervisor layer. That would only amount to unnecessary overhead impeding performance. Hadoop runs best on Linux machines, working directly with the underlying hardware. That said, Hadoop does work in a virtual machine. That’s a great way to learn and get Hadoop up and running fast and cheap. I have a 6-node cluster up and running in VMware Workstation on my Windows 7 laptop.
Q1. How will you write a custom partitioner for a Hadoop job
To have Hadoop use a custom partitioner you will have to do, at minimum, the following three things (a sketch follows the list):
- Create a new class that extends the Partitioner class
- Override the method getPartition
- In the wrapper that runs the MapReduce job, either
- add the custom partitioner to the job programmatically using the method setPartitionerClass, or
- add the custom partitioner to the job via a config file (if your wrapper reads from a config file or Oozie)
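A minimal sketch of those steps using the org.apache.hadoop.mapreduce API; the key/value types and the first-character partitioning logic are illustrative assumptions, not requirements.

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class FirstLetterPartitioner extends Partitioner<Text, IntWritable> {
  // Route keys to reducers by the first character of the key, so that
  // keys starting with the same character end up in the same partition.
  @Override
  public int getPartition(Text key, IntWritable value, int numPartitions) {
    if (key.getLength() == 0) {
      return 0;
    }
    return (key.charAt(0) & Integer.MAX_VALUE) % numPartitions;
  }
}

// In the driver/wrapper:
//   job.setPartitionerClass(FirstLetterPartitioner.class);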
Q2. How did you debug your Hadoop code
There can be several ways of doing this, but the most common ways are
- By using counters
- The web interface provided by Hadoop framework
Q3. Did you ever build a production process in Hadoop? If yes, what is the process when your Hadoop job fails for any reason?
This is an open-ended question, but most candidates, if they have written a production job, should talk about some type of alert mechanism, such as an email being sent or their monitoring system raising an alert. Since Hadoop works on unstructured data, it is very important to have a good alerting system for errors, because unexpected data can very easily break the job.
Q4. Did you ever run into a lopsided (skewed) job that resulted in an out-of-memory error? If yes, how did you handle it?
This is an open-ended question, but a candidate who claims to be an intermediate developer and has worked on a large data set (10-20 GB minimum) should have run into this problem. There can be many ways to handle it, but the most common is to alter your algorithm and break the job down into more MapReduce phases, or to use a combiner if possible.
Q5. What is Distributed Cache in Hadoop?
Distributed Cache is a facility provided by the Map/Reduce framework to cache files (text, archives, jars and so on) needed by applications during execution of the job. The framework will copy the necessary files to the slave node before any tasks for the job are executed on that node.
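A hedged sketch of wiring a file into the cache from the driver; the newer Job.addCacheFile API is shown (older code used the DistributedCache class directly), and the HDFS path is made up.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class CacheExample {
  public static Job configure() throws Exception {
    Job job = Job.getInstance(new Configuration(), "uses distributed cache");
    // The framework copies this file to every node before tasks start;
    // mappers/reducers can then open it as a local file (here aliased as "countries").
    job.addCacheFile(new URI("/user/hadoop/lookup/countries.txt#countries"));
    return job;
  }
}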
Q6. What is the benefit of Distributed Cache? Why can't we just have the file in HDFS and have the application read it?
This is because the distributed cache is much faster. The file is copied to every task tracker node once, at the start of the job, so if that node runs 10 or 100 mappers or reducers, they all use the same local copy. On the other hand, if your MapReduce code reads the file from HDFS, every mapper will access it from HDFS separately; a task tracker running 100 map tasks will read the file 100 times over the network, and HDFS is not very efficient when used like this.
Q7. What mechanism does the Hadoop framework provide to synchronize changes made to the Distributed Cache during the runtime of the application?
This is a trick question. There is no such mechanism; the Distributed Cache is, by design, read-only during job execution.
Q8. Have you ever used Counters in Hadoop? Give us an example scenario.
Anybody who claims to have worked on a Hadoop project is expected to have used counters, for example to count malformed input records during a data-cleansing job.
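One common scenario, sketched below with the mapreduce API: counting good and malformed records so the totals show up in the job's web UI and final summary. The counter names and the comma-count check are illustrative assumptions.

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class CountingMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

  // User-defined counters appear in the job's web UI and the final job
  // summary, which makes them handy for debugging data-quality issues.
  enum RecordQuality { GOOD, MALFORMED }

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    if (value.toString().split(",").length < 3) {
      context.getCounter(RecordQuality.MALFORMED).increment(1);
      return; // skip bad record
    }
    context.getCounter(RecordQuality.GOOD).increment(1);
    context.write(value, NullWritable.get());
  }
}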
Q9. Is it possible to provide multiple inputs to Hadoop? If yes, how can you give multiple directories as input to the Hadoop job?
Yes. The FileInputFormat class provides methods to add multiple directories as input to a Hadoop job.
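For example, with FileInputFormat you can call addInputPath repeatedly, or pass several comma-separated paths at once; the directories below are made up.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class MultiInputDriver {
  public static void addInputs(Job job) throws Exception {
    // Each call adds another input directory to the same job.
    FileInputFormat.addInputPath(job, new Path("/data/logs/2013/01"));
    FileInputFormat.addInputPath(job, new Path("/data/logs/2013/02"));
    // Or several comma-separated paths in one call:
    FileInputFormat.addInputPaths(job, "/data/extra/a,/data/extra/b");
  }
}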
Q10. Is it possible to have Hadoop job output in multiple directories? If yes, how?
Yes, by using the MultipleOutputs class.
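A hedged sketch using org.apache.hadoop.mapreduce.lib.output.MultipleOutputs; the named outputs ("small"/"large") and the size threshold are illustrative assumptions.

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

public class SplitReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
  private MultipleOutputs<Text, IntWritable> mos;

  // In the driver, register the named outputs first, e.g.:
  //   MultipleOutputs.addNamedOutput(job, "small", TextOutputFormat.class,
  //                                  Text.class, IntWritable.class);
  //   MultipleOutputs.addNamedOutput(job, "large", TextOutputFormat.class,
  //                                  Text.class, IntWritable.class);

  @Override
  protected void setup(Context context) {
    mos = new MultipleOutputs<>(context);
  }

  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable v : values) {
      sum += v.get();
    }
    // Route records to different output files/directories by total size.
    String name = sum < 100 ? "small" : "large";
    mos.write(name, key, new IntWritable(sum), name + "/part");
  }

  @Override
  protected void cleanup(Context context) throws IOException, InterruptedException {
    mos.close();
  }
}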
Q11. What will a hadoop job do if you try to run it with an output directory that is already present? Will it
- overwrite it
- warn you and continue
- throw an exception and exit
The hadoop job will throw an exception and exit.
Q12. How can you set an arbitrary number of mappers to be created for a job in Hadoop?
This is a trick question. You cannot set it directly; the number of map tasks is determined by the number of input splits.
Q13. How can you set an arbitrary number of reducers to be created for a job in Hadoop?
You can either do it programmatically, by using the method setNumReduceTasks in the JobConf class, or set it up as a configuration setting.
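Both styles, sketched below; the old JobConf API matches the wording of the answer, and the newer Job API offers the same method. The value 10 is arbitrary.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.Job;

public class ReducerCountExample {
  public static void main(String[] args) throws Exception {
    // Old (mapred) API:
    JobConf conf = new JobConf();
    conf.setNumReduceTasks(10);

    // New (mapreduce) API:
    Job job = Job.getInstance(new Configuration(), "example");
    job.setNumReduceTasks(10);
  }
}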
Q22. What is the
difference between Hadoop and a Relational Database?
Hadoop is not a database; rather, it is a framework consisting of HDFS (the Hadoop Distributed File System) for storage and MapReduce for processing.
Structured data is data that is organized into entities that
have a defined format, such as XML documents or database tables that conform to
a particular predefined schema. This is the realm of the RDBMS. MapReduce, however, also works with semi-structured data (e.g., spreadsheets) and unstructured data (e.g., image files, plain text), because the input keys and values for MapReduce are not an intrinsic property of the data; they are chosen by the person analyzing the data.
HADOOP ADMINISTRATOR INTERVIEW QUESTIONS
Following are some questions and answers to ask a Hadoop Administrator interviewee.
Q1. What are the default configuration files that are used in Hadoop
As of 0.20 release, Hadoop supported the following read-only default configurations
- src/core/core-default.xml
- src/hdfs/hdfs-default.xml
- src/mapred/mapred-default.xml
Q2. How will you make changes to the default configuration files
Hadoop does not recommend changing the default configuration files; instead, it recommends making all site-specific changes in the following files
- conf/core-site.xml
- conf/hdfs-site.xml
- conf/mapred-site.xml
Unless explicitly turned off, Hadoop by default specifies two resources, loaded in-order from the classpath:
- core-default.xml : Read-only defaults for hadoop.
- core-site.xml: Site-specific configuration for a given hadoop installation.
Hence, if the same property is defined in both core-default.xml and core-site.xml, the value in core-site.xml is used, since the site-specific file is loaded later and overrides the defaults (the same is true for the other two file pairs).
Q3. Consider a case scenario where you have set the property mapred.output.compress to true to ensure that all output files are compressed for efficient space usage on the cluster. If a cluster user does not want to compress data for a specific job, then what will you recommend he do?
Ask him to create his own configuration file, specify the configuration mapred.output.compress to false in it, and load this file as a resource in his job.
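A minimal sketch of what that looks like in the driver, assuming the user keeps a job-specific resource file (the file name and path are made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;

public class NoCompressJob {
  public static Job configure() throws Exception {
    Configuration conf = new Configuration();
    // my-job-site.xml contains:
    //   <property>
    //     <name>mapred.output.compress</name>
    //     <value>false</value>
    //   </property>
    conf.addResource(new Path("/home/user/my-job-site.xml"));
    return Job.getInstance(conf, "job without output compression");
  }
}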
Q4. In the above case scenario, how can you ensure that the user cannot override the configuration mapred.output.compress to false in any of his jobs?
This can be done by marking the property as final (setting final to true in its definition) in the site configuration file (for this property, mapred-site.xml).
Q5. Which of the following is the only required variable that needs to be set in the file conf/hadoop-env.sh for Hadoop to work
- HADOOP_LOG_DIR
- JAVA_HOME
- HADOOP_CLASSPATH
The only required variable to set is JAVA_HOME, which needs to point to the Java (JDK) installation directory.
Q6. List all the daemons required to run the Hadoop cluster
- NameNode
- DataNode
- JobTracker
- TaskTracker
Q7. What's the default port that the JobTracker web UI listens on
50030
Q8. What's the default port where the DFS NameNode web UI listens
50070
JAVA INTERVIEW QUESTIONS FOR HADOOP DEVELOPER
Q1. Explain the difference between a class variable and an instance variable, and how they are declared in Java
A class variable is a variable declared with the static modifier.
An instance variable is a variable in a class without the static modifier.
The main difference between a class variable and an instance variable is that memory for class variables is allocated only once, when the class is first loaded into memory. That means class variables do not depend on the objects of that class: however many objects there are, only one copy is created, at class-loading time.
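A small illustration of the difference; the class and field names are made up.

public class CounterDemo {
  static int classCount = 0; // class variable: one copy shared by all objects
  int instanceCount = 0;     // instance variable: one copy per object

  CounterDemo() {
    classCount++;    // incremented by every object created
    instanceCount++; // only this object's copy changes
  }

  public static void main(String[] args) {
    CounterDemo a = new CounterDemo();
    CounterDemo b = new CounterDemo();
    System.out.println(CounterDemo.classCount); // 2
    System.out.println(a.instanceCount);        // 1
    System.out.println(b.instanceCount);        // 1
  }
}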
Q2. Since an Abstract class in Java cannot be instantiated then how can you use its non static methods
By extending it
Q3. How would you make a copy of an entire Java object with its state?
Have this class implement Cloneable interface and call its method clone().
Q4. Explain Encapsulation, Inheritance and Polymorphism
Encapsulation is a process of binding or wrapping the data and the code that operates on the data into a single entity. This keeps the data safe from outside interference and misuse. One way to think about encapsulation is as a protective wrapper that prevents code and data from being arbitrarily accessed by other code defined outside the wrapper.
Inheritance is the process by which one object acquires the properties of another object.
The meaning of polymorphism is something like one name, many forms. Polymorphism enables one entity to be used as a general category for different types of actions; the specific action is determined by the exact nature of the situation. The concept of polymorphism can be summed up as "one interface, multiple methods".
Q5. Explain garbage collection?
Garbage collection is one of the most important features of Java.
Garbage collection is also called automatic memory management, as the JVM automatically removes unused objects from memory. A user program cannot directly free an object from memory; instead, it is the job of the garbage collector to automatically free objects that are no longer referenced by the program. Every class inherits the finalize() method from java.lang.Object; the finalize() method is called by the garbage collector when it determines that no more references to the object exist. In Java, it is a good idea to explicitly assign null to a variable when it is no longer in use.
Q6. What is similarities/difference between an Abstract class and Interface?
Differences
- Interfaces provide a form of multiple inheritance; a class can extend only one other class.
- Interfaces are limited to public methods and constants with no implementation. Abstract classes can have a partial implementation, protected parts, static methods, etc.
- A Class may implement several interfaces. But in case of abstract class, a class may extend only one abstract class.
- Interfaces are slower, as they require extra indirection to find the corresponding method in the actual class; abstract classes are faster.
Similarities
- Neither abstract classes nor interfaces can be instantiated
Q7. What are different ways to make your class multithreaded in Java
There are two ways to create new kinds of threads (both are sketched below):
- Define a new class that extends the Thread class
- Define a new class that implements the Runnable interface, and pass an object of that class to a Thread's constructor.
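A minimal sketch of both approaches:

public class ThreadDemo {
  // 1. Extend Thread and override run()
  static class Worker extends Thread {
    @Override
    public void run() {
      System.out.println("running in " + getName());
    }
  }

  // 2. Implement Runnable and hand an instance to a Thread's constructor
  static class Task implements Runnable {
    @Override
    public void run() {
      System.out.println("running in " + Thread.currentThread().getName());
    }
  }

  public static void main(String[] args) {
    new Worker().start();
    new Thread(new Task()).start();
  }
}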
Q8. What do you understand by synchronization? How do you synchronize a method call in Java? How do you synchronize a block of code in Java?
Synchronization is a process of controlling the access of shared resources by multiple threads in such a manner that only one thread can access one resource at a time. In a non-synchronized multithreaded application, it is possible for one thread to modify a shared object while another thread is in the process of using or updating the object's value. Synchronization prevents this type of data corruption.
- Synchronizing a method: put the keyword synchronized as part of the method declaration
- Synchronizing a block of code inside a method: put the block of code in synchronized (this) { /* some code */ }
Both forms are sketched below.
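A compact sketch of both forms:

public class SyncDemo {
  private int counter = 0;

  // Synchronizing a method: the lock is the object's monitor (this).
  public synchronized void incrementMethod() {
    counter++;
  }

  // Synchronizing only a block of code inside a method.
  public void incrementBlock() {
    // non-critical work could happen here without holding the lock
    synchronized (this) {
      counter++;
    }
  }
}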
Q9. What is transient variable?
A transient variable cannot be serialized. For example, if a variable is declared as transient in a Serializable class and the class is written to an ObjectStream, the value of the variable is not written to the stream; when the object is read back from the ObjectStream, the value of the variable is null.
Q10. What is the Properties class in Java? Which class does it extend?
The Properties class represents a persistent set of properties. The properties can be saved to a stream or loaded from a stream. Each key and its corresponding value in the property list is a string. The Properties class extends Hashtable.
Q11. Explain the concept of shallow copy vs deep copy in Java
In the case of a shallow copy, the cloned object refers to the same objects as the original, since only the object references get copied, not the referred objects themselves.
In the case of a deep copy, a clone of the object and of all objects referred to by that object is made.
Q12. How can you make a shallow copy of an object in Java
Use the clone() method inherited from the Object class
Q13. How would you make a copy of an entire Java object (deep copy) with its state?
Have the class implement the Cloneable interface and call its clone() method, overriding clone() so that the referenced objects are copied as well, not just their references.
Q14. Which of the following object-oriented principles is met with method overloading in Java?
- Inheritance
- Polymorphism
Polymorphism
Q15. Which of the following object-oriented principles is met with method overriding in Java?
- Inheritance
- Polymorphism
Polymorphism
Q16. What is the name of the collection interface used to maintain unique elements?
Set
Q17. What access level do you need to specify in the class declaration to ensure that only classes from the same package (directory) can access it? What keyword is used to define this specifier?
You do not need to specify any access level; Java will then use the default (package-private) access level, for which there is no keyword.
Q18. What's the difference between a queue and a stack?
Stacks work by the last-in-first-out (LIFO) rule, while queues use the first-in-first-out (FIFO) rule
Q19. How can you write user defined exceptions in Java
Make your class extend the Exception class
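A minimal sketch; the exception name and the usage shown in the comment are illustrative.

public class InvalidRecordException extends Exception {
  public InvalidRecordException(String message) {
    super(message);
  }
}

// Usage:
//   if (fields.length < 3) {
//     throw new InvalidRecordException("record has too few fields");
//   }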
Q20. What is the difference between checked and unchecked exceptions in Java? Give an example of each type.
All predefined exceptions in Java are either checked exceptions or unchecked exceptions. Checked exceptions must either be caught using a try..catch block or declared using the throws clause; if you don't, compilation of the program will fail.
- Example of a checked exception: ParseException
- Example of an unchecked exception: ArrayIndexOutOfBoundsException
Q21. We know that FileNotFoundException is inherited from IOException; does it matter in what order the catch statements for FileNotFoundException and IOException are written?
Yes, it does. FileNotFoundException is inherited from IOException, and an exception's subclasses have to be caught first.
Q22. How do we find out whether two strings are the same in Java? If the answer is equals(), then why do we have to use equals(); why can't we compare strings like integers?
We use the method equals() to compare the values of Strings. We can't use == as we do for primitive types like int, because == checks whether two variables point at the same instance of a String object.
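A short illustration (using new String(...) to force distinct instances):

public class StringCompare {
  public static void main(String[] args) {
    String a = new String("hadoop");
    String b = new String("hadoop");

    System.out.println(a == b);      // false: different object instances
    System.out.println(a.equals(b)); // true: same character content
  }
}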
Q23. What is "package" keyword
This is a way to organize files when a project consists of multiple modules. It also helps resolve naming conflicts when different packages have classes with the same names. Package-level access also allows you to protect data from being used by non-authorized classes
Q24. What is mutable object and immutable object
If an object's value can be changed, it is a mutable object (e.g., StringBuffer). If you are not allowed to change the value of an object after it is created, it is an immutable object (e.g., String, Integer, Float).
Q25. What are wrapped classes in Java. Why do they exist. Give examples
Wrapped classes are classes that allow primitive types to be accessed as objects, e.g. Integer, Float etc
Q26. Even though garbage collection cleans memory, why can't it guarantee that a program will not run out of memory? Give an example of a case where garbage collection will not prevent running out of memory.
Because it is possible for programs to use up memory resources faster than they are garbage collected. It is also possible for programs to create objects that are not subject to garbage collection. One example is trying to load a very big file into an array.
Q27. What is the difference between Process and Thread?
A process can contain multiple threads. In most multithreading operating systems, a process gets its own memory address space; a thread doesn't. Threads typically share the heap belonging to their parent process. For instance, a JVM runs in a single process in the host O/S. Threads in the JVM share the heap belonging to that process; that's why several threads may access the same object. Typically, even though they share a common heap, threads have their own stack space. This is how one thread's invocation of a method is kept separate from another's
Q28. How can you write an indefinite loop in Java?
while (true) {
}
OR
for (;;) {
}
Q29. How can you create a singleton class in Java?
Make the constructor of the class private and provide a static method to get the instance of the class
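A minimal sketch of the pattern using eager initialization; the class name is made up.

public class ClusterConfig {
  private static final ClusterConfig INSTANCE = new ClusterConfig();

  // Private constructor prevents direct instantiation.
  private ClusterConfig() {
  }

  public static ClusterConfig getInstance() {
    return INSTANCE;
  }
}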
Q30. What do keywords "this" and "super" do in Java
"this" is used to refer to current object. "super" is used to refer to the class extended by the current class
Q31. What are access specifiers in Java? List all of them.
Access specifiers are used to define the scope of variables in Java. There are four levels of access specifiers in Java:
- public
- private
- protected
- default
Q32. Which of the following three object-oriented principles do access specifiers implement in Java?
- Encapsulation
- Polymorphism
- Inheritance
Encapsulation
Q33. What is method overriding and method overloading
With overriding, you change the method's behavior for a subclass. Overloading involves having methods with the same name within a class but with different signatures