IPC (Inter-Process Communication) and RPC (Remote Procedure Call) both solve the same problem: making processes talk to each other. But they do it at very different abstraction levels.
IPC: you manage the communication
IPC is the low-level mechanism. You explicitly open connections, send messages, receive responses.
Common IPC mechanisms:
- Sockets (TCP, UDP, Unix domain sockets)
- Pipes
- Message queues
- Shared memory
With IPC, you're in control of everything: connection management, serialization, error handling, protocol design.
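The quickest way to feel this is with the humblest mechanism on that list: a pipe. Here's a minimal sketch (assuming a Unix-like system with the standard tr command available) where a parent process talks to a child over stdin/stdout pipes — you open the channel, you write the bytes, you read the reply:

// Toy illustration of IPC over pipes, not part of zkv.
use std::io::Write;
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    // Spawn a child process with its stdin/stdout connected to pipes.
    let mut child = Command::new("tr")
        .args(["a-z", "A-Z"])
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()?;

    // Send bytes to the child over the stdin pipe.
    // (The pipe closes when this handle is dropped at the end of the statement.)
    child
        .stdin
        .take()
        .expect("stdin was piped")
        .write_all(b"hello over a pipe\n")?;

    // Read whatever the child wrote back on its stdout pipe.
    let output = child.wait_with_output()?;
    print!("{}", String::from_utf8_lossy(&output.stdout)); // HELLO OVER A PIPE

    Ok(())
}

Swap the pipe for a TCP socket and the shape stays the same: you own both ends of the conversation.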
RPC: pretend it's local
RPC is built ON TOP of IPC (usually sockets). It's an abstraction that makes remote calls look like local function calls.
// Looks like a local function call
let result = remote_server.get_user_data(user_id);
Under the hood:
- Function call → serialized to message
- Sent via socket (IPC)
- Executed remotely
- Result sent back → deserialized
- Returned to caller
The practical difference
I'm building a key-value store, so here's how each approach looks:
IPC approach (what I'm building)
A CLI tool that you run like zkv connect --addr 127.0.0.1:6379:
$ zkv connect --addr 127.0.0.1:6379
Connected to zkv server
> SET mykey myvalue
OK
> GET mykey
myvalue
What's happening:
- CLI opens a TCP socket connection
- User types SET key value
- CLI encodes it into wire protocol format
- Sends bytes over socket
- Receives response bytes
- Decodes and prints response
- Connection stays open for more commands
You manage the socket, the protocol, the encoding/decoding. The connection persists.
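Stripped to its core, the client side of that loop is just a socket plus hand-rolled encoding and decoding. A rough sketch — using a made-up newline-delimited text format, not zkv's actual wire protocol:

use std::io::{BufRead, BufReader, Write};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // Open the TCP connection yourself; it stays open across commands.
    let mut stream = TcpStream::connect("127.0.0.1:6379")?;
    let mut reader = BufReader::new(stream.try_clone()?);

    // Encode the command into the wire format (here: a plain text line).
    stream.write_all(b"SET mykey myvalue\n")?;

    // Read and decode the server's response bytes.
    let mut response = String::new();
    reader.read_line(&mut response)?;
    println!("{}", response.trim()); // e.g. OK

    Ok(())
}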
RPC approach (library)
A library you include in your code:
use zkv::Client;
let client = Client::connect("127.0.0.1:6379")?;
client.set("mykey", "myvalue")?;
let value = client.get("mykey")?;
What's happening (hidden from you):
- Library establishes socket connection internally
- client.set() call → serialized to protocol format
- Sent over socket
- Response deserialized → returned as typed value
- Connection management handled by library
The socket, protocol, serialization - all abstracted away. You just call functions.
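Internally, such a library is doing the exact same socket work as the CLI above; it just hides it behind methods. A sketch of what that wrapper might look like — same toy text protocol as before, not the real zkv implementation:

use std::io::{BufRead, BufReader, Write};
use std::net::TcpStream;

// The caller only ever sees this type and its methods; the socket is private.
pub struct Client {
    stream: TcpStream,
    reader: BufReader<TcpStream>,
}

impl Client {
    pub fn connect(addr: &str) -> std::io::Result<Client> {
        // Socket setup happens here, once, out of the caller's sight.
        let stream = TcpStream::connect(addr)?;
        let reader = BufReader::new(stream.try_clone()?);
        Ok(Client { stream, reader })
    }

    pub fn set(&mut self, key: &str, value: &str) -> std::io::Result<()> {
        // Serialize the call, send it, discard the "OK" reply.
        self.command(&format!("SET {} {}\n", key, value)).map(|_| ())
    }

    pub fn get(&mut self, key: &str) -> std::io::Result<String> {
        // Same wire dance, but the reply comes back as a typed value.
        self.command(&format!("GET {}\n", key))
    }

    fn command(&mut self, line: &str) -> std::io::Result<String> {
        self.stream.write_all(line.as_bytes())?;
        let mut reply = String::new();
        self.reader.read_line(&mut reply)?;
        Ok(reply.trim().to_string())
    }
}

The caller never touches the stream; connect, encode, and decode (and, in a real library, retries and pooling) all live inside Client. The usage snippet at the top of this section is exactly how it reads from the outside.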
RPC examples
gRPC: Define services in Protocol Buffers, generate client/server code, call methods. Uses HTTP/2 over TCP sockets.
JSON-RPC: JSON payloads, usually carried over HTTP, that map requests to function calls.
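For instance, a call like client.get("mykey") could map to a request body of {"jsonrpc": "2.0", "method": "get", "params": ["mykey"], "id": 1}, with the result coming back in a JSON response carrying the same id.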
Apache Thrift: Similar to gRPC, different IDL and wire protocol.
IPC vs RPC decision
Use IPC when:
- You want full control over the protocol
- Building a CLI tool or custom client
- Performance-critical path (no abstraction overhead)
- Learning how things work under the hood
Use RPC when:
- You want simple function-call semantics
- Building a library for others to use
- Type safety matters (generated code from schemas)
- Multiple language support needed (gRPC has 10+ language bindings)
The abstraction cost
RPC trades control for convenience:
What you gain:
- Simple API (function calls, not socket management)
- Type safety (generated from schema)
- Connection pooling, retries, timeouts handled for you
What you lose:
- Control over wire format
- Visibility into what's happening on the network
- Some performance (extra layers of abstraction)
For most applications, the convenience is worth it. For learning systems programming? IPC teaches you more.
What I'm building
My zkv project is IPC-based - a CLI that manages TCP sockets directly. I'm learning:
- Socket programming
- Wire protocol design
- Message framing (sketched after this list)
- Connection handling
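Framing in particular is easy to underestimate: TCP gives you a byte stream, not messages, so the protocol has to say where one command ends and the next begins. One common approach — sketched here as an illustration, not necessarily what zkv will end up using — is to prefix every message with its length:

use std::io::{Read, Write};

// Write one length-prefixed frame: a 4-byte big-endian length, then the payload.
fn write_frame<W: Write>(w: &mut W, payload: &[u8]) -> std::io::Result<()> {
    let len = payload.len() as u32;
    w.write_all(&len.to_be_bytes())?;
    w.write_all(payload)
}

// Read one frame back: first the length, then exactly that many bytes.
fn read_frame<R: Read>(r: &mut R) -> std::io::Result<Vec<u8>> {
    let mut len_buf = [0u8; 4];
    r.read_exact(&mut len_buf)?;
    let len = u32::from_be_bytes(len_buf) as usize;
    let mut payload = vec![0u8; len];
    r.read_exact(&mut payload)?;
    Ok(payload)
}

fn main() -> std::io::Result<()> {
    // Round-trip through an in-memory buffer to show the framing logic.
    let mut buf = Vec::new();
    write_frame(&mut buf, b"SET mykey myvalue")?;
    let msg = read_frame(&mut buf.as_slice())?;
    assert_eq!(msg, b"SET mykey myvalue");
    Ok(())
}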
Could I wrap this in an RPC library later? Absolutely. But understanding the IPC layer underneath makes RPC less magical.
RPC is convenient, but it's built on IPC. Learn IPC first, then RPC makes sense. Start with RPC, and you might never understand what's actually happening on the wire.