Both of the classes I am currently teaching are in rooms set up for video recording; interested readers can find the results here and here. I arranged it that way partly for the convenience of students who miss a class or want to review, but more to make the classes available to anyone in the world with an internet connection who is interested.
There is, however, one problem. The camera is set up with a field of view covering virtually the entire width of the wall I am standing in front of. The result is that the image of my face is too small for viewers to read my expression, which largely eliminates the advantage of having video as well as audio recording. The resulting recordings do not look nearly as good as the recordings of public lectures of mine that have been made on various occasions and webbed.
The simple solution, which I hope to persuade the people in charge of the system to adopt next time I use it, would be to narrow the field of view down so it only covered the central location where I am normally standing. One disadvantage of that as a general solution to the problem is that professors sometimes move around to make use of the whiteboard, which also runs the full width of the wall, or for other reasons.
That problem could be solved by having a human behind the camera, either physically present or via remote control, pointing it at the professor's face, wherever it happens to be. But that would significantly raise the cost of recording lectures. Part of the attraction of the present system is that it does not require any human intervention.
Which suggests that what we really need is a robot cameraman. Modern cameras have face detection software that seems to work pretty well. It should be possible to use such software to detect where the speaker, the only person at the front of the room facing the camera, is standing, and automatically focus on him. I do not know if such equipment exists yet, but it should.
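For readers who want something concrete, here is a minimal sketch of the idea, assuming a camera feed readable by OpenCV's bundled frontal-face detector (the opencv-python package); the send_pan function is only a placeholder for whatever commands a real motorized pan-tilt mount would accept.

```python
# Minimal sketch of a "robot cameraman": find the one face looking at the
# camera and nudge a pan-tilt mount to keep it centered. Assumes the
# opencv-python package; send_pan() stands in for whatever protocol the
# actual pan-tilt head speaks (VISCA, ONVIF, a serial motor controller, ...).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def send_pan(step):
    """Placeholder for a real pan command (positive = pan right)."""
    print(f"pan {step:+.2f}")

def track(camera_index=0, dead_zone=0.1, gain=0.5):
    cap = cv2.VideoCapture(camera_index)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue  # nobody facing the camera in this frame
        # Take the largest detection: presumably the lecturer, not a student
        # glancing back at the camera.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        face_center = x + w / 2
        frame_center = frame.shape[1] / 2
        # Horizontal offset as a fraction of frame width, in [-0.5, 0.5].
        offset = (face_center - frame_center) / frame.shape[1]
        if abs(offset) > dead_zone:   # only move when the face is clearly off-center
            send_pan(gain * offset)
    cap.release()

if __name__ == "__main__":
    track()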
The novelty requirement in U.S. patent law bars the patenting of an invention that has been publicly disclosed less than a year before the patent is filed. Start your engines.
12 comments:
What you want is an automatic PTZ (pan-tilt-zoom) camera. These are widely available, especially as security cameras.
iSpy is a free, open-source piece of software with image-processing algorithms that automate PTZ tracking of moving objects. I've seen it work well as a proof of concept, but I'm not sure it would hold up to real-world use of the kind you require.
Chris Terman at MIT has mastered the art of filming his own lectures in a completely DIY fashion. He has his own mic, a laptop with a webcam, and an external camera. He records an audio stream from his mic and three simultaneous video streams: one via his laptop's webcam (of himself speaking), a second as a screencast (of his lecture slides), and a third from his external camera (of the chalkboards).
This is packaged as a single video file with multiple camera angles, so you can pick whatever part of the class you want to see at any given time.
Videoconference systems detect and focus on the speaker. Takes two mics and some software.
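For the curious, here is a toy sketch of the two-microphone idea: the difference in arrival time of the lecturer's voice at the two mics gives his direction. The mic spacing and sample rate below are made-up values, and a real videoconference system would use a more robust delay estimator and more microphones.

```python
# Toy sketch of two-microphone speaker localization: the lag between the two
# signals (found by cross-correlation) gives a bearing toward the speaker.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
MIC_SPACING = 0.5        # metres between the two microphones (assumed)
SAMPLE_RATE = 16_000     # samples per second (assumed)

def bearing_from_mics(left, right):
    """Estimate the speaker's bearing in radians from two synchronized mono
    signals; 0 = straight ahead, positive = toward the `right` microphone."""
    # Find the lag (in samples) that best aligns the two signals.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    delay = lag / SAMPLE_RATE                      # seconds
    # Convert the path-length difference to an angle, clipped to what the
    # geometry allows.
    ratio = np.clip(delay * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return np.arcsin(ratio)

# Toy check: the same click arrives 8 samples later at the right mic, so the
# speaker is off toward the left mic; prints roughly -20 degrees.
click = np.zeros(1000)
click[100] = 1.0
delayed = np.roll(click, 8)
print(np.degrees(bearing_from_mics(click, delayed)))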
Your problem is more complicated than the above commenters realize. Your camera is positioned at the back of a lecture hall, so you probably don't have the resolution for most facial-recognition track-and-zoom technology, and besides, the image would be pretty pixelated anyway. To get the effect you want you'd probably need tracking technology in the camera that interfaces with a powered tripod head. You might be able to rent that for less than just hiring a student to man the camera, but the student would probably do a better job and wouldn't require time before each class to set up and calibrate. Also, the student only requires a bottle of water and a granola bar, while a powered tripod requires electricity. If you can find an able videographer who's interested in the subject matter but isn't enrolled at your school, maybe you could just barter services. If I lived in your neck of the woods I'd take that gig in a heartbeat. A set of David Friedman lectures looks great on a demo reel, too.
Regarding patent law, I think you mean "more than a year." Also, that refers to disclosure by the inventor; so if you wait until December of 2013 to file a patent application on something you have just disclosed in your weblog, you could be rejected on that ground.
On the other hand, someone else's application could be rejected even if filed one day after your public disclosure; your weblog posting would be prior art.
There are other issues, such as whether your disclosure is enabling, and whether the general idea that it would be nice to have a robot cameraman would be enough to anticipate or render obvious a claimed invention which discloses such a thing in detail and recites specific claims, but I'm not going into all of that.
DISCLAIMER: I am not speaking for the U.S. Patent Office, and not representing myself as an authority on patent law or procedure.
Why not just ask one of the present students to volunteer as camera operator each lecture?
I'm thinking some sort of tracker device in the lecturer's pocket. The camera simply follows the tracker as the lecturer moves back and forth.
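If such a tracker could report the lecturer's position in room coordinates (how it would do so is left open), pointing the camera is just trigonometry. A toy sketch, with the room layout invented for illustration:

```python
# Toy sketch of the pocket-tracker idea: convert the tracker's reported room
# position into a pan angle for a camera mounted at the back wall.
# All coordinates here are invented for illustration.
import math

CAMERA_POS = (4.0, 10.0)   # camera: 4 m from the left wall, 10 m from the front wall

def pan_angle_to(tracker_pos):
    """Pan angle in degrees: 0 = straight down the room, positive = camera's right."""
    dx = tracker_pos[0] - CAMERA_POS[0]
    dy = CAMERA_POS[1] - tracker_pos[1]   # distance from camera toward the front of the room
    return math.degrees(math.atan2(dx, dy))

# Lecturer drifts 3 m to the right of the camera's line, near the whiteboard.
print(pan_angle_to((7.0, 0.5)))   # about 17.5 degrees of pan to the right
```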
Remember, a camera can't move on its own. It needs a powered tripod head, and if the movement is going to follow a digitally-recognized face that tripod head is going to have to interface somehow with the tracking technology that is presumably in the camera. This will be expensive and laborious to calibrate before each class, and it will be prone to failure every time you walk into a poorly lit area or turn away from the camera.
A student already taking the course might not make a good videographer, because that student would likely be hindered in his effort to take notes or participate in discussions.
The best bet is to hire (or barter with) an outside videographer. I guarantee you could find somebody who would donate his services in exchange for access to the lectures and the right to use them in his demo reel (and possibly a bottle of water and a granola bar each class period).
I emailed someone at the university who seems to be in charge of such things, and got a response. Apparently there is an existing technology that uses a pressure-sensitive mat, running along the lecturer's end of the classroom, to tell the camera where he is and so where it should be pointed. He is hoping that it will be included in the new building we are planning. He also mentioned the possibility of having one person controlling cameras in multiple classrooms.
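In effect, the mat scheme is a lookup table: whichever segment of the mat is being stood on selects a stored camera position. A toy sketch, with the segment count, preset angles, and recall_preset function invented for illustration:

```python
# Toy sketch of the pressure-mat approach: the mat reports which segment the
# lecturer is standing on, and each segment maps to a stored pan preset.
PAN_PRESETS = {0: -30, 1: -15, 2: 0, 3: 15, 4: 30}   # degrees, left to right

def recall_preset(angle):
    """Placeholder for whatever preset-recall command the camera actually accepts."""
    print(f"pan to {angle} degrees")

def on_mat_event(segment):
    recall_preset(PAN_PRESETS.get(segment, 0))

on_mat_event(3)   # lecturer steps onto the fourth segment -> camera pans 15 degrees right
```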
Lol robot cameraman, I like it :)