A Fun Dive into Python Multiprocessing and Z3


For the last couple of weeks, I have been working on a project that involves solving a large number of SMT specifications using the Z3 solver. These specifications, gathered from various public sources and written in SMT-LIB format, needed to be parsed, validated for compatibility with Z3, and solved efficiently.

I ran into some big performance issues at the start. Handling each specification one at a time just wasn’t going to cut it with the amount of data I had, so I decided to parallelize the work.

The “Brilliant” First Plan: Threading

I had worked with multithreading in C before, and my first thought was, “Why not use threads?” So, I jumped into Python’s threading module and set up a thread for each SMT spec. The plan was simple: let each thread tackle a spec and write the results when it was done. There was one issue to sort out: if a spec took too long (say, over 5 minutes), I needed a way to kill its thread.

import threading

solver_thread = threading.Thread(target=run_solver)
solver_thread.start()
solver_thread.join(300)  # wait up to 5 minutes
if solver_thread.is_alive():
    # Spoiler: threading.Thread has no kill() method, so this line
    # raises AttributeError -- Python offers no way to force-terminate a thread.
    solver_thread.kill()
    solver_thread.join()

This was basically like slapping duct tape on a leaky pipe. Killing the thread isn’t even possible: threading.Thread has no kill() method, and Python offers no safe way to terminate a thread, so the solver just kept running in the background. Z3 does have its own timeout parameter (solver.set(timeout=300000) — note the value is in milliseconds), but it didn’t work reliably for me. On top of that, Python’s Global Interpreter Lock (GIL) prevents true parallelism, which makes threads a poor fit for a CPU-bound task like this.
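For the record, Z3’s timeout knob takes milliseconds, so a five-minute budget looks like this (a minimal snippet, not my full setup):

import z3

solver = z3.Solver()
solver.set(timeout=300000)  # 300000 ms = 5 minutes; check() returns unknown on timeout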

Second Try: Multiprocessing

Recognizing that Z3 is not thread-safe and my task was CPU-bound, I turned to Python’s multiprocessing. I used ProcessPoolExecutor from the concurrent.futures module to manage a pool of worker processes.

import concurrent.futures

specifications = []  # list of specifications to solve
results = []  # collected results
with concurrent.futures.ProcessPoolExecutor(max_workers=48) as executor:
    for result in executor.map(process_smt_specifications, specifications):
        results.append(result)

The process_smt_specifications function reads a specification, solves it, and returns the result. executor.map distributes the work among the worker processes, and the max_workers parameter caps how many run at once. I set it to 48, half the number of cores on the server, because the solver is memory-hungry and I didn’t want to starve the other applications running on the machine.
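For context, a minimal sketch of what that worker function might look like — the real one evolved over the project (see below), but at this stage it was roughly:

import z3

def process_smt_specifications(file_path):
    # Each worker process gets its own solver instance, so no shared state.
    solver = z3.Solver()
    solver.set(timeout=300000)  # same 5-minute budget, in milliseconds
    solver.from_file(file_path)
    return file_path, str(solver.check())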

It worked! Well, kind of. With 48 workers, things sped up considerably. But, like everything good, there was a catch: after a few hours of running, no z3 processes showed up anymore and no new ones started, while memory usage kept climbing steadily. After some digging, I found the culprit: executor.map submits all the work (millions of tasks) to the pool up front, so the pending futures pile up in memory. I decided to process the work in batches instead and switch from executor.map to concurrent.futures.as_completed, which returns an iterator that yields futures as they finish.

...
batch_size = 1000  # number of specifications to process in each batch
max_workers = 48  # same worker cap as before
for i in range(0, len(specifications), batch_size):
    batch = specifications[i : i + batch_size]
    # A fresh pool per batch keeps the number of pending futures bounded.
    with concurrent.futures.ProcessPoolExecutor(
        max_workers=max_workers
    ) as executor:
        future_to_file = {
            executor.submit(process_smt_specifications, file): file for file in batch
        }
        for future in concurrent.futures.as_completed(future_to_file):
            file = future_to_file[future]
            try:
                results.append(future.result())
            except Exception as e:
                logger.error(f"Error processing file {file}: {e}")

This code worked as expected, and memory consumption was stable. Performance was also better than in the previous version.
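As an aside, there is a way to get the same bounded memory without tearing the pool down after every batch: keep a single executor alive and cap the number of in-flight futures, topping the set up as results come back. A sketch of that pattern (solve_all and window are my names here, not from my actual code):

import itertools
import concurrent.futures

def solve_all(specifications, max_workers=48, window=1000):
    results = []
    with concurrent.futures.ProcessPoolExecutor(max_workers=max_workers) as executor:
        spec_iter = iter(specifications)
        # Seed the pool with at most `window` tasks.
        pending = {
            executor.submit(process_smt_specifications, spec)
            for spec in itertools.islice(spec_iter, window)
        }
        while pending:
            done, pending = concurrent.futures.wait(
                pending, return_when=concurrent.futures.FIRST_COMPLETED
            )
            for future in done:
                results.append(future.result())
            # Refill: one new task per completed one.
            for spec in itertools.islice(spec_iter, len(done)):
                pending.add(executor.submit(process_smt_specifications, spec))
    return results

I stuck with the batch version since it was already running, but the windowed approach also avoids idling at the end of each batch (more on that below).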

Parsing Specifications with Z3

The SMT specs were in SMT-LIB format and spanned many different logics and theories, so I had to figure out which ones Z3 could actually handle. Z3’s from_file() prints warnings about unsupported constructs to the console, but I needed to capture that output programmatically, so I redirected the standard error stream.

import os
import sys
import tempfile

import z3

def is_z3_compatible(file_path):  # wrapper added so the early returns are runnable
    with tempfile.TemporaryFile(mode="w+") as temp_file:
        # Save the original stderr so it can be restored later.
        file_descriptor = os.dup(sys.stderr.fileno())
        try:
            # Point stderr at the temp file while Z3 parses the spec.
            os.dup2(temp_file.fileno(), sys.stderr.fileno())
            solver = z3.Solver()
            solver.from_file(file_path)
            temp_file.flush()
            temp_file.seek(0)
            dup_stderr = temp_file.read()
        except Exception:
            return False
        finally:
            # Restore the original stderr.
            os.dup2(file_descriptor, sys.stderr.fileno())
            os.close(file_descriptor)
        if "unsupported" in dup_stderr.lower():
            return False
        return True

If the output contains “unsupported,” I skip that spec, since it relies on theories or logics Z3 doesn’t handle. Not elegant, but it worked.
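Hypothetical usage, assuming the check is wrapped in a function like the is_z3_compatible above:

# Keep only the specs Z3 can actually parse.
specifications = [spec for spec in specifications if is_z3_compatible(spec)]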

Solving Specifications via Subprocess

Since Z3’s Python API has its quirks, I decided to feed the specs to the Z3 CLI through subprocess instead, which finally gave me reliable timeout handling.

import subprocess

def solve_with_z3_cli(smt_spec, timeout):  # wrapper added so the returns are runnable
    try:
        # "-in" makes z3 read the problem from stdin.
        proc = subprocess.Popen(
            ["z3", "-in"],
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            text=True,
        )
        stdout, _ = proc.communicate(input=smt_spec, timeout=timeout)
        output_lines = stdout.splitlines()
        results = [
            line for line in output_lines if line in {"sat", "unsat", "unknown"}
        ]
        return results
    except subprocess.TimeoutExpired:
        # communicate() raised, but z3 is still running: kill it and
        # reap it so it doesn't linger as a zombie.
        proc.kill()
        proc.communicate()
        return "timeout"
    except Exception as e:
        return str(e)

This approach worked reliably: timeouts were actually enforced, performance was solid, and the system stayed stable.
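Hypothetical usage, with a tiny inline spec (the function name is the one from my sketch above):

spec = "(declare-const x Int)\n(assert (> x 0))\n(check-sat)\n"
print(solve_with_z3_cli(spec, timeout=300))  # expected: ['sat']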

Unresolved Issues

Despite the improvements, I still hit occasional idle periods during which CPU usage dropped to near zero while memory usage stayed steady (so no leak). Initial investigation ruled out CPU throttling, as temperature and clock frequency were stable. I suspect the batch processing mechanism is to blame: because each batch gets a fresh executor, the whole batch has to wait for its slowest specification before the next batch can start, so a handful of stubborn specs can leave most of the 48 workers sitting idle.

This project was a rewarding learning experience, showcasing the power and challenges of multiprocessing in Python.