Search found 81 matches

by amit.g512
Sun Aug 28, 2016 10:41 am
Forum: Oracle
Topic: What are the components of Physical database structure of Oracle Database?
Replies: 1
Views: 1588

Re: What are the components of Physical database structure of Oracle Database?

An Oracle database consists of three types of files: one or more data files, two or more redo log files, and one or more control files.
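
A minimal JDBC sketch (not from the post; it assumes the Oracle JDBC driver is on the classpath, and the connection string and credentials are placeholders) that lists all three file types through Oracle's standard V$DATAFILE, V$LOGFILE and V$CONTROLFILE views:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OraclePhysicalFiles {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection string and credentials.
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:ORCL", "system", "password");
        // Each view exposes one of the three physical file types;
        // the second entry is the column holding the file path.
        String[][] views = {
                {"V$DATAFILE", "NAME"},      // data files
                {"V$LOGFILE", "MEMBER"},     // redo log files
                {"V$CONTROLFILE", "NAME"}};  // control files
        for (String[] v : views) {
            System.out.println("-- " + v[0]);
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT " + v[1] + " FROM " + v[0])) {
                while (rs.next()) {
                    System.out.println("   " + rs.getString(1));
                }
            }
        }
        conn.close();
    }
}
```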
by amit.g512
Sun Aug 14, 2016 12:48 am
Forum: Hadoop
Topic: Any Realtime Hadoop usage?
Replies: 1
Views: 2090

Re: Any Realtime Hadoop usage?

Realtime Hadoop usage at Facebook: Facebook deployed Facebook Messages, its first user-facing application built on the Apache Hadoop platform. It uses HDFS and HBase as the core technologies for this solution. Since then, many more applications have started to use HBase. We have gain...
by amit.g512
Sun Aug 14, 2016 12:41 am
Forum: Hadoop
Topic: What is Secondary Namenode?
Replies: 1
Views: 1745

Re: What is Secondary Namenode?

A Secondary NameNode is another daemon that is responsible for merging the edit log into the fsimage snapshot. The name 'secondary' can mislead people into thinking it is a 'backup' NameNode that can take over when the NameNode goes down. That is not what the Secondary NameNode does. It is actually a 'sidekick' for Nam...
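
As a small illustration of this checkpointing role (assuming the Hadoop 2.x property name dfs.namenode.checkpoint.period; Hadoop 1.x called it fs.checkpoint.period), this sketch reads how often the Secondary NameNode merges the edit log into the fsimage:

```java
import org.apache.hadoop.conf.Configuration;

public class CheckpointInterval {
    public static void main(String[] args) {
        // Loads core-site.xml / hdfs-site.xml from the classpath.
        Configuration conf = new Configuration();
        // Seconds between checkpoints; 3600 (one hour) is the usual default.
        long period = conf.getLong("dfs.namenode.checkpoint.period", 3600);
        System.out.println("Secondary NameNode checkpoints every "
                + period + " seconds");
    }
}
```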
by amit.g512
Sun Aug 14, 2016 12:39 am
Forum: Hadoop
Topic: How is the metadata stored?
Replies: 1
Views: 1864

Re: How is the metadata stored?

We learned that the NameNode saves metadata on disk. On a busy cluster there can be thousands of file system operations (create, delete, move) per minute, and on a big cluster the size of the metadata can approach gigabytes. Updating such a large file on every operation would be slow and unwieldy. So how does the NameNode...
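
A toy sketch (entirely illustrative, not HDFS code) of the idea the post builds toward: keep the full metadata snapshot immutable and append each change to a small edit log, replaying the log into the snapshot only at checkpoint time:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ToyNameNode {
    private final Set<String> fsimage = new HashSet<>();    // last full snapshot
    private final List<String> editLog = new ArrayList<>(); // cheap appends

    // Each operation is a fast append, never a rewrite of the snapshot.
    void create(String path) { editLog.add("CREATE " + path); }
    void delete(String path) { editLog.add("DELETE " + path); }

    // A checkpoint replays the edit log onto the snapshot and starts a
    // fresh, empty log.
    void checkpoint() {
        for (String op : editLog) {
            String[] parts = op.split(" ", 2);
            if (parts[0].equals("CREATE")) fsimage.add(parts[1]);
            else fsimage.remove(parts[1]);
        }
        editLog.clear();
    }

    public static void main(String[] args) {
        ToyNameNode nn = new ToyNameNode();
        nn.create("/user/data/a.txt");
        nn.delete("/user/data/a.txt");
        nn.checkpoint();
        System.out.println(nn.fsimage); // empty: create then delete
    }
}
```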
by amit.g512
Sun Aug 14, 2016 12:31 am
Forum: Hadoop
Topic: What are the two main parts of the Hadoop framework?
Replies: 1
Views: 1855

Re: What are the two main parts of the Hadoop framework?

Hadoop consists of two main parts:
- the Hadoop Distributed File System (HDFS), a distributed file system with high throughput, and
- Hadoop MapReduce, a software framework for processing large data sets.
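
The two parts meet in the classic word-count job: the input and output live on HDFS, while MapReduce does the processing. A standard sketch using the org.apache.hadoop.mapreduce API:

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE); // emit (word, 1)
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) sum += val.get();
            context.write(key, new IntWritable(sum)); // emit (word, total)
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```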
by amit.g512
Sun Aug 14, 2016 12:29 am
Forum: Hadoop
Topic: How many maximum JVM can run on a slave node?
Replies: 1
Views: 1642

Re: How many maximum JVM can run on a slave node?

One or multiple task instances can run on each slave node. Each task instance runs as a separate JVM process. The number of task instances can be controlled through configuration; typically, a high-end machine is configured to run more task instances.
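
For example (assuming the Hadoop 1.x TaskTracker property names of this era), the per-node limits can be read from the loaded configuration:

```java
import org.apache.hadoop.conf.Configuration;

public class SlotCount {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Maximum map/reduce task JVMs a slave node may run at once;
        // 2 is the Hadoop 1.x default for both.
        int mapSlots = conf.getInt("mapred.tasktracker.map.tasks.maximum", 2);
        int reduceSlots = conf.getInt("mapred.tasktracker.reduce.tasks.maximum", 2);
        System.out.println("Map task JVMs:    " + mapSlots);
        System.out.println("Reduce task JVMs: " + reduceSlots);
    }
}
```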
by amit.g512
Sun Aug 14, 2016 12:28 am
Forum: Hadoop
Topic: What is the Hadoop MapReduce API contract for a key and value Class?
Replies: 1
Views: 1619

Re: What is the Hadoop MapReduce API contract for a key and value Class?

- The key must implement the org.apache.hadoop.io.WritableComparable interface.
- The value must implement the org.apache.hadoop.io.Writable interface.
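
A sketch of a custom key honoring that contract; the class name and field are illustrative:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

// WritableComparable = Writable (serialization) + Comparable (key sorting).
public class YearKey implements WritableComparable<YearKey> {
    private int year;

    public YearKey() {}                       // required no-arg constructor
    public YearKey(int year) { this.year = year; }

    @Override
    public void write(DataOutput out) throws IOException { out.writeInt(year); }

    @Override
    public void readFields(DataInput in) throws IOException { year = in.readInt(); }

    @Override
    public int compareTo(YearKey other) { return Integer.compare(year, other.year); }

    @Override
    public int hashCode() { return year; }    // used by the default HashPartitioner
}
```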
by amit.g512
Sun Aug 14, 2016 12:27 am
Forum: Hadoop
Topic: What is the use of Combiners in the Hadoop framework?
Replies: 1
Views: 1678

Re: What is the use of Combiners in the Hadoop framework?

Combiners are used to increase the efficiency of a MapReduce program. They aggregate intermediate map output locally on each mapper node, which reduces the amount of data that needs to be transferred to the reducers. You can use your reducer code as a comb...
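
A driver sketch reusing the classes from the word-count sketch earlier in these results as both combiner and reducer; this is valid here because summing counts is associative and commutative:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountWithCombiner {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count + combiner");
        job.setJarByClass(WordCountWithCombiner.class);
        job.setMapperClass(WordCount.TokenizerMapper.class);  // from sketch above
        job.setCombinerClass(WordCount.IntSumReducer.class);  // map-side pre-aggregation
        job.setReducerClass(WordCount.IntSumReducer.class);   // final aggregation
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```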
by amit.g512
Sun Aug 14, 2016 12:26 am
Forum: Hadoop
Topic: Can Reducer talk with each other?
Replies: 1
Views: 1562

Re: Can Reducer talk with each other?

No, each reducer runs in isolation.
by amit.g512
Sun Aug 14, 2016 12:25 am
Forum: Hadoop
Topic: Where the Mapper’s Intermediate data will be stored?
Replies: 1
Views: 1665

Re: Where the Mapper’s Intermediate data will be stored?

The mapper output (intermediate data) is stored on the local file system (NOT HDFS) of each individual mapper node. This is typically a temporary directory location that can be set up in the configuration by the Hadoop administrator. The intermediate data is cleaned up after the Hadoop job completes.
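
A sketch (assuming the Hadoop 1.x property name mapred.local.dir; Hadoop 2.x renamed it mapreduce.cluster.local.dir) reading the local scratch directories an administrator has configured:

```java
import org.apache.hadoop.conf.Configuration;

public class SpillDirs {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Comma-separated list of local (non-HDFS) directories used for
        // intermediate map output; the default shown here is a placeholder.
        String[] dirs = conf.getStrings("mapred.local.dir", "/tmp/hadoop/local");
        for (String d : dirs) System.out.println(d);
    }
}
```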
