Monday, 8 September 2025

PARALLEL COMPUTING (BCS702) Program 6: Write an MPI program to demonstrate deadlock using point-to-point communication, and its avoidance by altering the call sequence


Objective: Demonstrate how a deadlock arises in MPI point-to-point communication and how it can be avoided by reordering the send and receive calls.

Part A: Deadlock Example

Code (Deadlock-prone)

// mpi_deadlock.c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char* argv[]) {
    int rank, data;

    MPI_Init(&argc, &argv);                   // Start the MPI environment
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     // Get this process's rank

    if (rank == 0) {
        int msg = 100;
        // Receive first, then send -- blocks until process 1 sends something
        MPI_Recv(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int msg = 200;
        // Receive first, then send -- blocks until process 0 sends something
        MPI_Recv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}


Explanation:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char* argv[]) {
    int rank, data;

    MPI_Init(&argc, &argv);                  // Start MPI environment
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    // Get process rank (0, 1, ...)
    ...

  • MPI_Init → initializes MPI.

  • MPI_Comm_rank → gives each process a unique ID (rank).

    • Example: if you run with 2 processes → one gets rank = 0, the other rank = 1 (a defensive check for this assumption is sketched below).
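
The listing assumes it is launched with exactly two processes, but it never checks this. As a small defensive sketch (not part of the original program), the setup could be extended with MPI_Comm_size to verify the process count and abort otherwise:

    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);    // total number of processes in MPI_COMM_WORLD
    if (size != 2) {
        if (rank == 0)
            fprintf(stderr, "This program requires exactly 2 processes\n");
        MPI_Abort(MPI_COMM_WORLD, 1);        // terminate all processes with error code 1
    }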

👀 Process 0 (rank = 0)

if (rank == 0) {
    int msg = 100;
    MPI_Recv(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
}

  • Creates an integer message msg = 100.

  • First action → MPI_Recv: process 0 waits to receive an integer from process 1.

  • Only after receiving will it send its own message (100) to process 1.


👀 Process 1 (rank = 1)

else if (rank == 1) {
    int msg = 200;
    MPI_Recv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Send(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
}

  • Creates an integer message msg = 200.

  • First action → MPI_Recv: process 1 waits to receive an integer from process 0.

  • Only after receiving will it send its own message (200) to process 0.


❌ The Problem (Deadlock)

  • Process 0 → waiting for data from Process 1 (via MPI_Recv).

  • Process 1 → waiting for data from Process 0 (via MPI_Recv).

👉 Both are stuck waiting forever.
MPI_Recv is a blocking call: it returns only after a matching message has arrived. Since neither process sends before receiving, no message is ever sent and both processes remain blocked.
This situation is called a deadlock.

Conclusion:
Both processes call MPI_Recv first, so neither ever reaches its MPI_Send, and the program deadlocks.
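
To build the program, use the mpicc compiler wrapper (provided by MPI implementations such as Open MPI and MPICH; this assumes one of them is installed). The run command and the resulting hang are shown below; a deadlocked run has to be killed manually, for example with Ctrl+C.

$ mpicc mpi_deadlock.c -o mpi_deadlock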

Sample Output (Deadlock)
$ mpirun -np 2 ./mpi_deadlock
# Program hangs indefinitely — no output is produced

Part B: Deadlock-Free Version
Code (Avoiding Deadlock by Call Order)
// mpi_no_deadlock.c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char* argv[]) {
    int rank, data;

    MPI_Init(&argc, &argv);                   // Start the MPI environment
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     // Get this process's rank

    if (rank == 0) {
        int msg = 100;
        // Send first, then receive
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Process 0 received %d from Process 1\n", data);
    } else if (rank == 1) {
        int msg = 200;
        // Send first, then receive
        MPI_Send(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        MPI_Recv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Process 1 received %d from Process 0\n", data);
    }

    MPI_Finalize();
    return 0;
}

Explanation:

Code (Deadlock-Free)

#include <stdio.h>
#include <mpi.h>

int main(int argc, char* argv[]) {
    int rank, data;

    MPI_Init(&argc, &argv);                  // Step 1: Start MPI environment
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    // Step 2: Get process rank (0 or 1)
  • MPI_Init → starts MPI.

  • MPI_Comm_rank → gives each process a unique rank (0 or 1 here).


👀 Process 0 (rank = 0)

if (rank == 0) {
    int msg = 100;
    MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);                        // Step 3: Send first
    MPI_Recv(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);    // Step 4: Receive later
    printf("Process 0 received %d from Process 1\n", data);
}
  • Creates message msg = 100.

  • First action → MPI_Send: sends 100 to process 1.

  • Then it waits to receive an integer from process 1.

  • Finally prints:

    Process 0 received 200 from Process 1

👀 Process 1 (rank = 1)

else if (rank == 1) {
    int msg = 200;
    MPI_Send(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);                        // Step 3: Send first
    MPI_Recv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);    // Step 4: Receive later
    printf("Process 1 received %d from Process 0\n", data);
}
  • Creates message msg = 200.

  • First action → MPI_Send: sends 200 to process 0.

  • Then it waits to receive an integer from process 0.

  • Finally prints:

    Process 1 received 100 from Process 0

✅ Why This Code Does NOT Deadlock

  • In Part A, both processes called MPI_Recv first → each blocked waiting for a message that the other had not yet sent.

  • In Part B, both processes call MPI_Send first → for a small message such as a single int, the send typically completes right away because MPI copies the data into an internal buffer.

  • When each process then calls MPI_Recv, the matching message is already available → both receives succeed.

Thus, no process gets stuck. 🎯
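
For completeness, a run of the deadlock-free version should print the two lines already described in the walkthrough above; their order may differ between runs, since the two ranks print independently.

Sample Output (Deadlock-Free)
$ mpicc mpi_no_deadlock.c -o mpi_no_deadlock
$ mpirun -np 2 ./mpi_no_deadlock
Process 0 received 200 from Process 1
Process 1 received 100 from Process 0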


🔑 Key Takeaway

  • Ordering matters in MPI.

  • If you do Recv first on both sides → ❌ deadlock.

  • If you do Send first → ✅ it works in this example, because MPI buffers the small outgoing message until the other side receives it. Note, however, that the MPI standard does not guarantee buffering for MPI_Send, so with large messages even send-first on both sides can stall; a fully portable alternative is sketched below.
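
As a related sketch (not part of the prescribed lab program), the same exchange can be written with MPI_Sendrecv, which performs the send and the receive in one call and cannot deadlock regardless of message size or buffering policy. The partner rank is computed as 1 - rank, assuming exactly two processes.

// mpi_sendrecv.c -- alternative sketch using MPI_Sendrecv (assumes exactly 2 processes)
#include <stdio.h>
#include <mpi.h>

int main(int argc, char* argv[]) {
    int rank, data;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int msg = (rank == 0) ? 100 : 200;   // each rank sends a different value
    int partner = 1 - rank;              // rank 0 talks to rank 1 and vice versa

    // Combined send+receive: MPI orders the two operations internally, so no deadlock is possible
    MPI_Sendrecv(&msg, 1, MPI_INT, partner, 0,
                 &data, 1, MPI_INT, partner, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("Process %d received %d from Process %d\n", rank, data, partner);

    MPI_Finalize();
    return 0;
}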


