Yesterday brought some nice Excel sheets for keeping track of our data. They make the process much smoother. Ideally, many of these statistics would be gathered by tools we already use for tracking, version control, compiling, editing, etc. This sort of integration will take years to occur.
We added a feedback loop to the PSP. This means that any praise or complaints we have about the PSP should be logged so the process itself can improve.
We're continuing to measure coding time, number of bugs, etc. The fundamental problem that needs addressing is that code quality totally outweighs code size, functionality, even bug counts. Code quality is subjective, and evidence of it is often slow to appear. It can be measured in how long it takes [a trusted] someone to understand and use your code, and in the faces they make while doing it.
The PSP seems to teach that it is more scientific to estimate the size of the code and derive hours/days from that size, rather than estimating hours/days directly (as we usually do).
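The idea behind size-first estimation is that size correlates with effort, and your own history tells you how. A minimal sketch of that idea, fitting a line from past estimated sizes to actual hours (all numbers below are hypothetical, not course data):

```python
# Sketch of size-based effort estimation: fit hours as a linear function
# of estimated LOC over past projects, then apply it to a new estimate.
# The history and the 250-LOC new program are made-up illustration data.

def fit_line(xs, ys):
    """Least-squares fit y = b0 + b1*x over historical (size, hours) pairs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b1 = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
         sum((x - mean_x) ** 2 for x in xs)
    b0 = mean_y - b1 * mean_x
    return b0, b1

# Hypothetical history: estimated LOC vs. actual hours on past programs.
est_loc = [120, 200, 90, 310]
act_hours = [6.0, 9.5, 4.5, 14.0]

b0, b1 = fit_line(est_loc, act_hours)
predicted_hours = b0 + b1 * 250   # new program estimated at 250 LOC
print(round(predicted_hours, 1))  # -> 11.5
```

The point is that the subjective judgment is confined to the size estimate; the conversion to time comes from data.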
The French teacher repeats this often, about every part of the process: try it; if it works, keep it, and if it doesn't, drop it. The course is (cleverly) designed to expect each student to customize the process into something that works for them. This is correct.
Because we are all working in an unfamiliar environment, there is ramp-up time. As structured, students become familiar with the tools and start learning the PSP simultaneously. Result: at least the first data point should be thrown away, since one cannot tell from the data which factor is responsible for what. This would be easy to fix by coding a couple of programs in this environment before instruction begins. (This is science 101: experiments must have control/baseline data.) That data will have to be gathered on the job, after the course. This entire week basically amounts to a trial run; the data and process will not apply directly to work. In a way, this week is also a sales pitch.
It's interesting to hear how our class compares to other classes which have taken this course. They say we're generally faster by a factor of two.
Automatically measuring the complexity of code is an unsolved problem. It reminds me of a program that tried to determine the purpose of code by analyzing the source. "It has a theoretical flair to it that I really like," says the French guy. It's not as hard as taking English and translating it into code (a main part of our job), but it's close. Estimating the complexity of unwritten code is, I think, harder than writing the code.
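Partial, automatable proxies do exist, though they miss the kind of complexity that matters above. One well-known example is cyclomatic complexity, which roughly counts independent paths through the control flow. A crude sketch that approximates it by counting branch points in a Python syntax tree (the sample function is invented for illustration):

```python
# Crude approximation of cyclomatic complexity: parse Python source and
# count decision points in the AST. A proxy for structural complexity
# only; it says nothing about how hard the code is to understand.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try,
                ast.BoolOp, ast.IfExp, ast.comprehension)

def cyclomatic_complexity(source: str) -> int:
    """Complexity = 1 + number of decision points in the parsed source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

sample = """
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "prime-ish"
"""
print(cyclomatic_complexity(sample))  # -> 4 (two ifs, one for, plus one)
```

Metrics like this are easy to compute and weakly correlated with the faces people make while reading your code, which is exactly the gap the paragraph above is about.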
The best practice we've used is also discussed here: have several experienced people get together to estimate, discuss, and converge on a single estimate for each requirement. Simple. Usually works. Takes time. Needs sufficient requirement detail. Not automatic.
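The mechanical part of that meeting, deciding when the group has converged, can be sketched simply. A hypothetical convergence check in the style of Wideband Delphi (the 15% tolerance and the rounds of numbers are made up):

```python
# Sketch of the stopping rule in a group-estimation meeting: gather one
# estimate per person, discuss, repeat until the spread is small, then
# take the median as the agreed estimate. Tolerance and data are
# hypothetical, not a prescribed method from the course.

def converged(estimates, tolerance=0.15):
    """True when the spread (max - min) is within tolerance of the median."""
    estimates = sorted(estimates)
    median = estimates[len(estimates) // 2]
    return (estimates[-1] - estimates[0]) <= tolerance * median

rounds = [
    [3.0, 8.0, 5.0],   # first round: wide disagreement -> discuss
    [4.5, 6.0, 5.0],   # after discussion: closer, still too wide
    [5.0, 5.5, 5.0],   # close enough -> stop
]

for days in rounds:
    if converged(days):
        print(sorted(days)[len(days) // 2])  # agreed estimate, in days
        break
```

The discussion between rounds is the part that actually works, and the part that cannot be automated.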
There is a chart of how long it took readers to read various chapters of our textbook, Chapter Pages Versus Time. I would like to see a similar chart: Chapter Pages Versus Alcohol Consumption.