Fooey, now this class has homework, too. All homework should be emailed to me. I will send you a reply when I get your mail; if you do not get a reply, I did not get your email. You are responsible for having an email account that can communicate successfully with me. I am flexible, to a degree, on which of my email accounts you use. I will send you email from GMail, however.
Homework   Due          Worth
hw1        9/2008       20
hw2        12/4/2008    200
hw3        10/14/2008   200
hw4        11/25/2008   200
hw5        11/25/2008   100
hw6        12/4/2008    100
Since no one in the class seems to know C++ and the textbook requires knowledge of C++, you are to write a C++ class that is relatively simple, but not completely trivial. Ideally it should have
- a header file that another program can include with the C++ #include directive.
- an implementation file of what is declared in the header file.
- a third file with a main program that uses the class.
You should try to use g++ or the Intel C++ compiler since these are the most common compilers used on academic parallel computers. Windows and Mac OS X are almost unheard of as platforms for parallel computing. OS X has the advantage of being a UNIX system and most software found on parallel computers will compile and run easily as a result.
The homework that I have received is the following:
Implement iterative improvement for a parallel computer, which is on page 48 (or thereabouts) of the class notes. You will do this assignment in stages that will take a while. The ISC cluster on the second floor of Ross Hall is still dead. An effort to get it going again has met resistance, but is being pushed back. I will see if I can
Find and identify a machine that runs mpich. Some Linux distributions have mpich already built and ready to use, e.g., Ubuntu and OpenSuSE. Avoid Fedora like the plague. If you have to use a Red Hat Linux inspired distribution, try CentOS. By far the easiest distribution is Ubuntu. Mike Sollami has installed Ubuntu and can offer the rest of the class advice. Joyce Rigelo is using an iMac that has mpich installed. I have mpich installed on Macs, too. I have installed it on many Linux distributions in the past.
Write a Matlab implementation of iterative improvement. Use the tridiagonal matrix A = [-1,2,-1] with b = (1,1,...,1)^T for starters (make A and b parameters to your function so that you can solve other Ax=b problems). Remember to make A, x, and b single-precision data. The residual r should be calculated with A, x, and b as double precision. The easiest way to do this in Matlab is to store A, x, and b twice (once in single precision and once more in double precision; you have to use different variable names to do this, of course).
Construct a C++ program using as many components as you can find on the CDROM that accompanies the textbook. Between the parallel LU decomposition and solve routines plus the conjugate gradient routine, you should be able to find almost everything you need already. The first thing you should do is identify which files on the CDROM are useful; then put it all together.
Make the code work on 1 core on 1 processor first. Then try 2 threads on 1 core on 1 processor. Then try 8 threads on 1 core on 1 processor. Finally watch it work flawlessly on a real parallel computer. Remember: I debug all of my parallel codes on airplanes. I have only once had a program not run the first time on a parallel computer (and it was not my fault, so there... huh!).
In the class notes are some lemmas about the convergence of the conjugate gradient method that I did not prove. Prove them. Consider this assignment to be akin to a take-home midterm based on its weight in the homework table. You need to decide if all of the lemmas need to be proven as one comprehensive induction proof or if you can prove them individually. The theorem can be proven once the lemmas are proven, independent of your choice.
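For orientation, the CG lemmas in most treatments are the following standard statements; the exact grouping and numbering in the class notes may differ, so check them against page 48 and thereabouts:

```latex
% Standard CG lemmas for Ax = b with A symmetric positive definite.
% After k steps, with residuals r_i and search directions p_i:
\begin{align*}
  &\text{(residual orthogonality)}   && r_i^T r_j = 0, \quad i \neq j,\\
  &\text{(A-conjugate directions)}   && p_i^T A p_j = 0, \quad i \neq j,\\
  &\text{(Krylov subspace identity)} &&
    \operatorname{span}\{r_0, \dots, r_{k-1}\}
    = \operatorname{span}\{p_0, \dots, p_{k-1}\}
    = \mathcal{K}_k(A, r_0),
\end{align*}
where $\mathcal{K}_k(A, r_0)
  = \operatorname{span}\{r_0, A r_0, \dots, A^{k-1} r_0\}$.
```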
From the textbook,
- Chapter 3, Section 3.5.1, problems 3, 5, 6(a)-(c), and 8.
- Chapter 4, Section 4.4.1, problems 10, 13.
- Chapter 4, Section 4.4.2, problems 1, 4, 5, 7.
From the class notes on Automatic Differentiation, experiment with an existing AD tool that you find on the Internet. Google "automatic differentiation" and select a tool. Learn how to use it by writing some functions in a language your selected tool can handle and letting the tool write AD code for you. Turn in a description of which tool you used, the language you wrote your functions in, and the output for each function. Evaluate how easy the process was.
Write a short report on an aspect of Monte Carlo methods from the list of 8 topics below:
Do a reply-all and announce your topic. The first person to choose a topic (based on the send time) gets it. Write a 2-page report on your topic in LaTeX or Word and send it to me before class on Thursday. Do not copy and paste from Wikipedia; look beyond it. Provide citations.
- direct sampling
- stratified sampling (including recursive forms)
- Gibbs sampling (Pani Fernando)
- Metropolis-Hastings algorithm (Mike Bostick)
- evolution strategy (Phil Meister)
- genetic algorithms (Mike Sollami)
- simulated annealing (Joyce Rigelo)
- stochastic tunneling (Meng Xu)