Version: 0.2.0

Write Operations

Write

import aerospike_py as aerospike

client = aerospike.client({"hosts": [("127.0.0.1", 3000)]}).connect()
key: tuple[str, str, str] = ("test", "demo", "user1")

# Simple write
client.put(key, {"name": "Alice", "age": 30})

# Supported types: str, int, float, bytes, list, dict, bool, None
client.put(key, {
    "str_bin": "hello",
    "int_bin": 42,
    "float_bin": 3.14,
    "list_bin": [1, 2, 3],
    "map_bin": {"nested": "dict"},
})

# With TTL
client.put(key, {"val": 1}, meta={"ttl": 300})

# Create only (fail if exists)
client.put(key, {"val": 1}, policy={"exists": aerospike.POLICY_EXISTS_CREATE_ONLY})
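
When the key already exists, a create-only put raises an exception. A minimal handling sketch; the `RecordExistsError` class below is a local stand-in because the library's actual exception name for this case is not shown in these docs (check `aerospike_py.exception` for the real class):

```python
# Local stand-in for the "record already exists" error. In real code,
# import the library's actual exception class (this name is a guess):
#   from aerospike_py.exception import RecordExistsError
class RecordExistsError(Exception):
    pass


def create_if_absent(client, key, bins):
    """Return True if the record was created, False if it already existed.

    In real code, pass policy={"exists": aerospike.POLICY_EXISTS_CREATE_ONLY}
    as in the example above; a plain string is used here as a placeholder.
    """
    try:
        client.put(key, bins, policy={"exists": "CREATE_ONLY"})
        return True
    except RecordExistsError:
        return False
```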

Update

client.increment(key, "age", 1)
client.increment(key, "score", 0.5)
client.append(key, "name", " Smith")
client.prepend(key, "greeting", "Hello, ")

Delete

client.remove(key)

# With generation check
client.remove(key, meta={"gen": 5}, policy={"gen": aerospike.POLICY_GEN_EQ})

# Remove specific bins
client.remove_bin(key, ["temp_bin", "debug_bin"])

Touch (Reset TTL)

client.touch(key, val=600)  # or: await client.touch(key, val=600)

Multi-Operation (Operate)

Execute multiple operations atomically on a single record.

ops: list[dict] = [
    {"op": aerospike.OPERATOR_WRITE, "bin": "name", "val": "Bob"},
    {"op": aerospike.OPERATOR_INCR, "bin": "counter", "val": 1},
    {"op": aerospike.OPERATOR_READ, "bin": "counter", "val": None},
]
record = client.operate(key, ops)
print(record.bins["counter"])

# Ordered results
result = client.operate_ordered(key, ops)
for bt in result.ordered_bins:
    print(f"{bt.name} = {bt.value}")

Batch Write

Write multiple records with per-record bins in a single batch call. This is the batch version of put() — each record can have different bin names and values.

records = [
    (("test", "demo", "user1"), {"name": "Alice", "age": 30}),
    (("test", "demo", "user2"), {"name": "Bob", "age": 25}),
    (("test", "demo", "user3"), {"name": "Charlie", "age": 35}),
]
results = client.batch_write(records)
for br in results.batch_records:
    if br.result != 0:
        print(f"Failed: {br.key}, code={br.result}, in_doubt={br.in_doubt}")

Retry with auto-recovery: Records that fail with transient errors (timeout, device overload, key busy) are automatically retried with exponential backoff:

# Retry failed records up to 5 times
results = client.batch_write(records, retry=5)

in_doubt flag

When br.in_doubt is True, the write may have completed on the server despite the error (e.g., timeout after the write was sent). Check in_doubt before retrying to avoid duplicate writes on non-idempotent operations.
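
That rule can be encoded in a small retry helper. A sketch, assuming only the batch_write return shape shown above (batch_records entries with result and in_doubt fields); the function and its policy of quarantining in-doubt records are illustrative, not part of the library:

```python
def retry_safe_batch_write(client, records, max_attempts=3):
    """Retry failed batch writes, but quarantine in-doubt records.

    Records with in_doubt=True may already be applied on the server, so
    resending them could double-apply non-idempotent writes (increments,
    appends). They are returned for manual inspection instead of retried.
    """
    pending = list(records)
    quarantined = []
    for _ in range(max_attempts):
        if not pending:
            break
        results = client.batch_write(pending)
        still_failing = []
        for rec, br in zip(pending, results.batch_records):
            if br.result == 0:
                continue
            if br.in_doubt:
                quarantined.append(rec)    # outcome unknown: do not resend
            else:
                still_failing.append(rec)  # definite failure: safe to retry
        pending = still_failing
    return pending, quarantined
```

Idempotent writes (e.g. a plain put of fixed values) can safely retry in-doubt records as well; the split above only matters for non-idempotent operations.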

Batch Operate / Remove

# Batch operate: returns BatchRecords (same shape as batch_read)
keys = [("test", "demo", f"user{i}") for i in range(1, 4)]
ops = [{"op": aerospike.OPERATOR_INCR, "bin": "views", "val": 1}]
results = client.batch_operate(keys, ops)
for br in results.batch_records:
    if br.result == 0 and br.record is not None:
        print(br.record.bins)

# Batch remove
results = client.batch_remove(keys)
for br in results.batch_records:
    if br.result != 0:
        print(f"Failed to remove: {br.key}")

Optimistic Locking

from aerospike_py.exception import RecordGenerationError

record = client.get(key)
try:
    client.put(
        key,
        {"val": record.bins["val"] + 1},
        meta={"gen": record.meta.gen},
        policy={"gen": aerospike.POLICY_GEN_EQ},
    )
except RecordGenerationError:
    print("Concurrent modification, retry needed")
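
In practice the read-modify-write is retried in a bounded loop. A sketch of that pattern; POLICY_GEN_EQ and RecordGenerationError are stubbed locally here so the snippet is self-contained (in real code, use aerospike.POLICY_GEN_EQ and the import shown above):

```python
# Local stand-ins; in real code use aerospike.POLICY_GEN_EQ and
# `from aerospike_py.exception import RecordGenerationError`.
POLICY_GEN_EQ = 1

class RecordGenerationError(Exception):
    pass


def increment_with_cas(client, key, bin_name, delta, max_retries=5):
    """Read-modify-write with a generation check, retrying on races."""
    for _ in range(max_retries):
        record = client.get(key)
        try:
            client.put(
                key,
                {bin_name: record.bins[bin_name] + delta},
                meta={"gen": record.meta.gen},
                policy={"gen": POLICY_GEN_EQ},
            )
            return True
        except RecordGenerationError:
            continue  # another writer got in first; re-read and retry
    return False
```

Note that for a plain counter, client.increment (or OPERATOR_INCR in operate) is atomic on its own and cheaper; the loop above is for updates that must be computed from the value just read.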

Tips

  • Batch size: 100-5,000 keys per batch works well; very large batches may time out.
  • Timeouts: increase total_timeout for large batch operations.
  • Error handling: individual batch records can fail independently, so always check br.record for None before using it.
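
The batch-size tip can be applied with a small chunking helper (a sketch; the 1,000-key default is simply a value within the recommended range):

```python
def chunked(keys, size=1000):
    """Yield successive slices of at most `size` keys."""
    for i in range(0, len(keys), size):
        yield keys[i:i + size]


# Usage: one batch call per chunk instead of a single oversized call, e.g.
#   for chunk in chunked(all_keys):
#       results = client.batch_remove(chunk)
```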