Multi-camera vision systems have important applications in a number of fields, including robotics and security. One interesting problem for such systems is determining the effect of camera placement on the quality of service provided by a network of Pan/Tilt/Zoom (PTZ) cameras with respect to a specific image processing application. The goal of this work is to investigate how to place a team of PTZ cameras, potentially used for collaborative tasks such as surveillance, and to analyze the dynamic coverage they can provide.

Computational Geometry approaches to various formulations of sensor placement problems offer elegant solutions; however, they often rest on unrealistic assumptions about real-world sensors, such as infinite sensing range and infinite rotational speed. Other approaches to camera placement account for the constraints of real-world computer vision applications, but yield only approximate solutions over a discrete problem space.

A contribution of this work is an algorithm for camera placement that leverages Computational Geometry principles over a continuous problem space, using a simple yet representative model of dynamic camera coverage. This balances accounting for real-world application constraints with keeping the problem tractable.
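To make the contrast with idealized sensors concrete, the following is a minimal sketch of a dynamic-coverage model for a single PTZ camera. All names and parameters here are illustrative assumptions, not the thesis's actual model: the camera has a finite sensing radius and a finite pan speed, so covering a target costs time rather than being instantaneous as in classic Computational Geometry formulations.

```python
import math

class PTZCamera:
    """Hypothetical PTZ camera with finite range and finite pan speed."""

    def __init__(self, x, y, pan_deg, max_range, pan_speed_dps):
        self.x, self.y = x, y            # camera position in the plane
        self.pan = pan_deg               # current pan angle, degrees
        self.max_range = max_range       # finite sensing radius
        self.pan_speed = pan_speed_dps   # maximum pan speed, degrees/second

    def time_to_cover(self, px, py):
        """Seconds needed to pan onto target (px, py); inf if out of range."""
        dx, dy = px - self.x, py - self.y
        if math.hypot(dx, dy) > self.max_range:
            return math.inf  # an ideal sensor would never return inf here
        bearing = math.degrees(math.atan2(dy, dx))
        # smallest signed angular difference, normalized to [-180, 180)
        diff = (bearing - self.pan + 180.0) % 360.0 - 180.0
        return abs(diff) / self.pan_speed

cam = PTZCamera(0.0, 0.0, pan_deg=0.0, max_range=50.0, pan_speed_dps=90.0)
print(cam.time_to_cover(0.0, 10.0))   # 90-degree pan at 90 deg/s -> 1.0 s
print(cam.time_to_cover(100.0, 0.0))  # beyond sensing radius -> inf
```

Under a model like this, "coverage" of a region becomes time-dependent: a placement algorithm can trade off how much area each camera can reach against how quickly it can reach it, which an infinite-speed, infinite-range abstraction cannot express.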

Library of Congress Subject Headings

Robot vision; Robots--Programming; Computer vision; Image processing--Digital techniques; Zoom lens photography

Publication Date


Document Type


Student Type


Degree Name

Computer Engineering (MS)

Department, Program, or Center

Computer Engineering (KGCOE)


Advisor
Shanchieh Jay Yang

Advisor/Committee Member

Andreas Savakis

Advisor/Committee Member

Chris Homan


Physical copy available from RIT's Wallace Library at TJ211.3 .R87 2006


Campus
RIT – Main Campus