Free Download Latest 2014 Pass4sure & Lead2pass Cloudera CCA-332 Dumps

Vendor: Cloudera
Exam Code: CCA-332
Exam Name: Cloudera Certified Administrator for Apache Hadoop

QUESTION 1
Hadoop provides a web interface for all of the following EXCEPT: (Choose 1)

A.    Keeping track of the number of files and directories stored in HDFS.
B.    Keeping track of jobs running on the cluster.
C.    Browsing files in HDFS.
D.    Keeping track of tasks running on each individual slave node.
E.    Keeping track of processor and memory utilization on each individual slave node.

Answer: E

QUESTION 2
You have a cluster running with the Fair Scheduler enabled. There are currently no jobs running on the cluster. You submit job A, so that only job A is running on the cluster. A while later, you submit job B; now jobs A and B are running on the cluster at the same time.
Which of the following describes how the Fair Scheduler operates? (Choose 2)

A.    When job B gets submitted, it will get assigned tasks, while job A continues to run with fewer tasks.
B.    When job A gets submitted, it doesn’t consume all the task slots.
C.    When job A gets submitted, it consumes all the task slots.
D.    When job B gets submitted, job A has to finish first, before job B can get scheduled.

Answer: AC
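The Fair Scheduler's behavior above is driven by an allocations file that defines pools and their shares. The following fair-scheduler.xml is an illustrative sketch only (pool name and slot counts are invented for the example, not taken from the exam material); with no minimums reserved elsewhere, a lone job consumes all task slots, and a second job is assigned slots as they free up:

```xml
<?xml version="1.0"?>
<!-- Illustrative Fair Scheduler allocations file (Hadoop 1.x style).
     Pool name and slot values are example assumptions. -->
<allocations>
  <pool name="default">
    <!-- Minimum task slots guaranteed to this pool -->
    <minMaps>10</minMaps>
    <minReduces>5</minReduces>
    <!-- Relative weight when dividing excess capacity -->
    <weight>1.0</weight>
  </pool>
</allocations>
```

Jobs submitted to the same pool share its slots fairly, which is why job A shrinks rather than running to completion before job B starts.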

QUESTION 3
Which of the following statements is the most accurate about the choice of operating systems to run on a Hadoop cluster?

A.    Linux and Solaris/OpenSolaris are preferable to Windows. Solaris running on SPARC hardware is the preferred Hadoop slave node configuration.
B.    Linux is preferable to Windows and Solaris/OpenSolaris. Some Linux distributions are intended for cluster environments more than others.
C.    The choice of operating system isn’t very important for a Hadoop cluster.
D.    Linux is preferable to Windows and Solaris/OpenSolaris, but the choice of Linux distribution is unimportant when planning a Hadoop cluster.

Answer: B

QUESTION 4
It is recommended that you run the HDFS balancer periodically. Why? (Choose 3)

A.    To improve data locality for MapReduce tasks.
B.    To ensure that there is capacity in HDFS for additional data.
C.    To help HDFS deliver consistent performance under heavy loads.
D.    To ensure that all blocks in the cluster are 128MB in size.
E.    To ensure that there is consistent disk utilization across the DataNodes.

Answer: BCE
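Running the balancer periodically is typically scripted as a simple command. A minimal sketch, assuming a Hadoop 1.x cluster (the era of this exam; on Hadoop 2.x the equivalent is `hdfs balancer`), with an illustrative threshold value:

```shell
# Rebalance block distribution across DataNodes.
# -threshold is the allowed deviation (in percent) of each DataNode's
# disk utilization from the cluster average; 10 is an example value.
hadoop balancer -threshold 10
```

The balancer moves blocks from over-utilized to under-utilized DataNodes until every node is within the threshold of the cluster average, which is what keeps disk utilization consistent (answer E) and performance predictable under load (answer C).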

If you want to pass the Cloudera CCA-332 exam successfully, do not miss reading the latest Lead2pass Cloudera CCA-332 practice tests.
If you can master all Lead2pass questions, you will be able to pass, 100% guaranteed.

http://www.lead2pass.com/CCA-332.html