Question 61 :
MPI_Send is used to
- Collect a message
- Transfer a message
- Send a message
- Receive a message
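MPI_Send transfers a message from one process (rank) to another. A minimal sketch of the idea, simulating point-to-point transfer between two "ranks" with a thread-safe queue standing in for the MPI runtime's buffers (the `send`/`recv` helpers here are illustrative analogues, not the real MPI API):

```python
import queue
import threading

# A thread-safe queue stands in for the MPI runtime's message buffers.
channel = queue.Queue()

def send(msg):
    """Analogue of MPI_Send: transfer a message toward the receiver."""
    channel.put(msg)

def recv():
    """Analogue of MPI_Recv: block until a message arrives."""
    return channel.get()

# "Rank 0" sends on a separate thread; "rank 1" receives here.
sender = threading.Thread(target=send, args=("hello from rank 0",))
sender.start()
received = recv()
sender.join()
```

The point the question is after: `MPI_Send` performs the transfer of a message, which the matching receive call then consumes.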
Question 62 :
The average number of steps taken to execute a set of instructions can be made less than one by using ______.
- Sequential execution
- Super-scaling
- Pipelining
- ISA
Question 63 :
UMA architecture uses _______ in its design
- cache
- shared memory
- message passing
- distributed memory
Question 64 :
Processing of multiple tasks simultaneously on multiple processors is called
- Parallel processing
- Distributed processing
- Uni- processing
- Multi-processing
Question 65 :
The cost of parallel processing is primarily determined by
- switching complexity
- circuit complexity
- Time Complexity
- space complexity
Question 66 :
Types of HPC applications include
- Mass Media
- Business
- Management
- Science
Question 67 :
Multiple applications running independently are typically called
- Multiprogramming
- Multithreading
- Multitasking
- Synchronization
Question 68 :
Virtualization that creates one single address space architecture is called
- Loosely coupled
- Space based
- Tightly coupled
- peer-to-peer
Question 69 :
A characteristic of CISC (Complex Instruction Set Computer) is
- Variable format instructions
- Fixed format instructions
- Instructions are executed by hardware
- unsigned long char
Question 70 :
A simple application of exploratory decomposition is ______
- The solution to a 15-puzzle
- The solution to a 20-puzzle
- The solution to any puzzle
- None of the above
Question 71 :
Speedup can be as low as ______
- 1
- 2
- 0
- 3
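Speedup is the ratio of serial to parallel execution time; when communication or synchronization overhead dominates, the parallel version can run slower than the serial one, so speedup can fall below 1 and approach 0. A small sketch (the `speedup` helper is illustrative):

```python
def speedup(t_serial, t_parallel):
    """Speedup = serial execution time / parallel execution time."""
    return t_serial / t_parallel

good = speedup(10.0, 5.0)    # parallelism helps: speedup of 2
bad = speedup(10.0, 20.0)    # overhead dominates: speedup below 1
```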
Question 72 :
The fraction of data references satisfied by the cache is called ______
- Cache hit ratio
- Cache fit ratio
- Cache best ratio
- None of the above
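The cache hit ratio directly determines average memory access time. A minimal sketch of the two formulas (function names are illustrative; times are in arbitrary units):

```python
def hit_ratio(hits, total_references):
    """Fraction of data references satisfied by the cache."""
    return hits / total_references

def effective_access_time(h, t_cache, t_memory):
    """Average access time: h * cache time + (1 - h) * memory time."""
    return h * t_cache + (1.0 - h) * t_memory

r = hit_ratio(90, 100)                    # 90 of 100 references hit
t = effective_access_time(r, 1.0, 100.0)  # even 10% misses dominate
```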
Question 73 :
Zero-address instruction format is used by
- RISC architecture
- CISC architecture
- Von Neumann architecture
- Stack-organized architecture
Question 74 :
An algorithm is called greedy because
- the greedy algorithm never considers the same solution again
- the greedy algorithm always gives the same solution again
- the greedy algorithm never considers the optimal solution
- the greedy algorithm never considers the whole program
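A greedy algorithm commits to the locally best choice at each step and never revisits a decision. Coin change is the classic small example; a sketch (assuming US-style denominations, for which greedy happens to be optimal):

```python
def greedy_change(amount, coins=(25, 10, 5, 1)):
    """Greedy coin change: repeatedly take the largest coin that fits.
    Once a coin is chosen, the choice is never reconsidered."""
    picked = []
    for c in coins:                # coins in decreasing order
        while amount >= c:
            picked.append(c)
            amount -= c
    return picked

change = greedy_change(63)         # 25 + 25 + 10 + 1 + 1 + 1
```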
Question 75 :
One of the parallel algorithm models is the
- Data parallel model
- Bit model
- Data model
- Network model
Question 76 :
A multiprocessor machine capable of executing multiple instructions on multiple data sets is called
- SISD
- SIMD
- MIMD
- MISD
Question 77 :
Computation that is not performed by the serial version is called
- Excess computation
- Serial computation
- Parallel computation
- Cluster computation
Question 78 :
The primary forms of data exchange between parallel tasks are ______
- Accessing a shared data space
- Exchanging messages
- Both A and B
- None of the above
Question 79 :
MPI_Init is used to
- Close the MPI environment
- Initialize the MPI environment
- Start programming
- Call processes
Question 80 :
Memory management on a multiprocessor must deal with all of the issues found on a
- Uniprocessor computer
- Computer
- Processor
- System
Question 81 :
Partitioning refers to decomposing the computational activity into
- Small tasks
- Large tasks
- A full program
- A group of programs
Question 82 :
Memory system performance is largely captured by ______
- Latency
- Bandwidth
- Both A and B
- None of the above
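Latency and bandwidth combine in the standard cost model for moving data: time = startup latency + message size / bandwidth. A sketch with illustrative numbers (1 µs latency, 1 GB/s bandwidth):

```python
def transfer_time(message_bytes, latency_s, bandwidth_bytes_per_s):
    """Simple cost model: startup latency plus size over bandwidth."""
    return latency_s + message_bytes / bandwidth_bytes_per_s

# Small messages are latency-bound; large messages are bandwidth-bound.
t_small = transfer_time(64, 1e-6, 1e9)          # 64 B message
t_large = transfer_time(64 * 2**20, 1e-6, 1e9)  # 64 MiB message
```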
Question 83 :
An interface between the user or an application program and the system resources is the
- microprocessor
- microcontroller
- multi-microprocessor
- operating system
Question 84 :
In All-to-All Broadcast each processor is the source as well as destination.
- TRUE
- FALSE
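In an all-to-all broadcast, every process starts with one piece of data and ends up holding the data of all processes, so each process is both a source and a destination. A minimal simulation of the end state (the function name is illustrative):

```python
def all_to_all_broadcast(per_process_data):
    """Simulate the result of an all-to-all broadcast: afterwards,
    every process holds a copy of every process's data."""
    gathered = list(per_process_data)
    # One result list per process; each contains everyone's data.
    return [list(gathered) for _ in per_process_data]

result = all_to_all_broadcast(["a", "b", "c"])  # 3 processes
```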
Question 85 :
A _________ computation performs one multiply-add on a single pair of vector elements
- dot product
- cross product
- multiply
- add
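A dot product performs exactly one multiply-add per pair of vector elements, which is why it is the standard example of this computation pattern. A sketch:

```python
def dot(x, y):
    """Dot product: one multiply-add for each pair of elements."""
    acc = 0.0
    for xi, yi in zip(x, y):
        acc += xi * yi   # the single multiply-add step
    return acc

d = dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])   # 1*4 + 2*5 + 3*6
```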
Question 86 :
The disadvantage of using a parallel mode of communication is ______
- Leads to erroneous data transfer
- It is costly
- Security of data
- complexity of network
Question 87 :
In MPI programming, MPI_Reduce is the instruction for a
- Full operation
- Limited operation
- Reduction operation
- Selected operation
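MPI_Reduce combines one value from each rank into a single result at the root using an operation such as sum or max. A stand-in sketch using Python's `functools.reduce` (the `mpi_reduce_sim` name is illustrative, not the real MPI API, and the list plays the role of one value per rank):

```python
from functools import reduce
import operator

def mpi_reduce_sim(values_per_rank, op=operator.add):
    """Stand-in for MPI_Reduce: fold one value per rank into a
    single result, as the root process would receive it."""
    return reduce(op, values_per_rank)

total = mpi_reduce_sim([1, 2, 3, 4])         # sum reduction
biggest = mpi_reduce_sim([7, 3, 9], op=max)  # max reduction
```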
Question 88 :
Speculative decomposition consists of ______
- Conservative approaches
- Optimistic approaches
- Both A and B
- Only B
Question 89 :
Parallel processing may occur
- In the data stream
- In the instruction stream
- In the network
- In transferring
Question 90 :
What does WAR stand for?
- Write before read
- Write after write
- Write after read
- Write with read
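A write-after-read (WAR) hazard arises when instruction i reads a register that a later instruction j writes: if j's write is reordered before i's read, i sees the wrong value. A sketch of the correct program order, with a dict standing in for the register file:

```python
# Register file stand-in; r1 initially holds 5.
regs = {"r1": 5, "r2": 0}

# Program order (the write to r1 must NOT be moved before the read):
r = regs["r1"]        # instruction i: read r1  -> gets the old value 5
regs["r1"] = 42       # instruction j: write r1 (write-after-read)
regs["r2"] = r + 1    # i's result correctly uses the pre-write value
```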