I've decided to write a post about some project ideas I've had lately but will likely never have the time to dedicate to them. I'm hoping someone else, possibly you 🙂, might find these interesting to work on. Feel free to let me know in the comments section if you decide to have a go at one of these, would like more information/references, or just think it's not going to work (maybe the case with idea 1?).
1. Automated OProfile profiling of Mesa via Phoronix Test Suite
To be honest I haven't used OProfile yet, so I'm not 100% sure this idea makes sense. Currently, part of the reason the open source drivers have lower performance than their closed source counterparts is increased CPU overhead. It would be great if profiling were built into the Phoronix Test Suite; that way you could analyse the outputs of both tools together to do things like look at what was executing when the frame rates dropped and what percentage of time was spent in those code paths at that point in time. I assume you could do this by using three benchmark runs to see when frames normally drop or CPU usage spikes, then enabling/disabling OProfile a little before/after that time in a fourth run. Maybe you would need to do an additional run for each spot to be analysed so that turning on OProfile doesn't throw the timing out. This type of feature could also be useful for spotting CPU-related performance regressions.
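A manual version of this workflow might look roughly like the sketch below. This is only a rough outline, not something the Phoronix Test Suite supports out of the box: the test profile name and the idea of attaching the profiler part-way through a run are my assumptions, and `operf` (OProfile's newer front end) must be run with suitable permissions.

```shell
# Hypothetical sketch: profile a Phoronix Test Suite benchmark with OProfile.
# "pts/openarena" is an illustrative test profile name, not a recommendation.
BENCHMARK="pts/openarena"

# Simplest case: run the whole benchmark under operf so samples cover the run.
operf phoronix-test-suite batch-benchmark "$BENCHMARK"

# Report which symbols (e.g. inside the Mesa driver) consumed the most CPU time.
opreport --symbols --long-filenames | head -n 20
```

For the "enable profiling a little before the known frame drop" idea, `operf --pid <pid>` can attach to an already-running process after a `sleep`, which is one possible way to avoid profiling the entire run and throwing the timing out.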
2. Do analysis of OpenGL driver quality using Piglit
Piglit is a collection of automated tests for OpenGL and OpenCL implementations. The goal of Piglit is to help improve the quality of open source OpenGL and OpenCL drivers by providing developers with a simple means to perform regression tests.
There has been a lot of attention on the quality of, and differences between, OpenGL drivers lately, especially after Valve's Rich Geldreich posted a critical opinion piece on the state of the current drivers. It would be interesting to use Piglit to create a picture of driver quality and differences. The idea would be to produce a result similar to the graph and PDF produced with the g-truc samples. I have posted a question to the Piglit mailing list to gather ideas on how you would go about this.
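As a starting point, a comparison could be as simple as running the same Piglit test set against each driver and building a combined summary. A minimal sketch, assuming Piglit's command-line front end is installed; the result directory names here are illustrative:

```shell
# Hypothetical sketch: run the same Piglit test profile against two drivers
# and generate a side-by-side HTML comparison of the results.

# On a system using the open source driver:
piglit run quick results/mesa-radeonsi

# After switching to (or rebooting into) the closed source driver:
piglit run quick results/catalyst

# Produce an HTML summary highlighting pass/fail differences between the runs:
piglit summary html --overwrite summary/ results/mesa-radeonsi results/catalyst
```

Turning those per-test pass/fail differences into a single quality "picture" like the g-truc graph would still need a post-processing step to aggregate results per extension or feature area.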
3. Create a distributed compute network to find possible VRAM optimisations
Lauri Kasanen created an artificial intelligence project to attempt to find a better strategy for handling VRAM. Two of the conclusions Lauri came to in his thesis are as follows:
“First, the parameters can be adequate, and merely more computing time is needed to find a better solution.”
“Second, it is possible the input parameters are adequate, but the processing power of the network is not. If so, it would need more hidden nodes, which would also mean slower training. This is hinted towards by the inability of the current network to do well at both 64 and 128 MB VRAM and the higher amounts.”
For someone interested in artificial intelligence it might be an interesting project to take Lauri's work (all available on GitHub), expand upon it, and port it to a distributed computing platform such as BOINC, where the open source community could then use its collective computing power to attempt to find a better solution.
The BOINC website has some information on porting software and creating a BOINC server.
[5a] Thesis Repo: https://github.com/clbr/jamkthesis
[5b] Code and Data: https://github.com/clbr/hotbos