Wednesday, May 14, 2014

Final thoughts on the class

It's been an interesting ride for sure.  I have some mixed feelings I guess.

The Good
The course has a 'real-world' feel to it that you don't get as much in other CS courses.  We talk about industry standards, current trends, and how stuff actually gets made and (gasp) marketed.  Being free to choose all of your technologies as well as the project content leaves room for creativity.  Ackley is a great speaker and I think we all enjoyed his lectures a great deal.  Our finished product for the class gives us way more pride than getting an A on some final exam.

The Bad
Sometimes, it didn't seem like the class was teaching us anything.  It's very non-traditional; at first I really liked this, but it started to wear on me after a while.  I kept trying to decide whether that was still a good thing.  I think that overall, it is good, but the class could use a wee bit more structure and consistency.  Students often don't know what comes next.  Frankly, the 'grading scale' scares us; we have little clue what it is or how it works and accordingly don't feel in control of our academic success in the course.

The Ugly
I think future iterations of the class should do away with personal project ownership.  I like that everyone proposes a project and the best ones get made.  However, I don't think their owners should have to do pitches to 'sell' them to their classmates.  We get plenty of practice with project pitches in the latter part of the course.  Related to this, I'm not sure that having team leaders is a good idea either - this (naturally) seems to lead to contention from some team members who feel their grade is negatively impacted (via not being positively impacted) by not being a team leader.  It also adds stress to team leaders that they didn't sign up for.  I don't think the course is more valuable for having people directly labeled as 'project lead'.


Those were some of the tougher 3 credit hours I've earned during my time here.

Making a 'software time capsule'

That's kind of what it felt like today, to me, when we were wrapping up all the components of this project (code and non-code elements alike) into a single tarball.  I was picturing somebody opening this file up in 30 years (if they could...) and looking through it.  Would they be impressed?  Disgusted?  Would they laugh at how 'hard' or 'crazy' software engineering was in our day, like how we laugh at old-timers when they talk about punch cards?  I also wondered if people that far in the future would understand what the product was for and what it wasn't for.  I kind of wish we had made more documentation to go with the shipment - I guess the project proposal describes the purpose of most of it though.

It also felt a little odd to 'pull the trigger' and ship the thing.  For one, there's always that lingering feeling of "is everything good in there?" (at least for me).  But also it's this feeling that the project is over, and will probably stay in its neat archive for the rest of its days "being looked at by top men".  In some ways that makes me sad, but I'm not sure that I have the expertise yet to bring a product like this to market.  Perhaps one day, future earthlings (human or not) will unearth the project and bring it back to life with their super futuristic computing technologies.  Or maybe they'll find it quaint when they discover it, already having an ultra-advanced case-based diagnostic program of some kind.

Tuesday, May 13, 2014

I'll take it!

We got 2nd place, and I'm more than satisfied with that.

I was SUPER impressed with the presentations today.  I had written in my last post that I wasn't seeing the point of doing so many practice pitches.  It kind of 'clicked' for me as I was watching the final ones today.  I was seeing each group give a killer talk, and remembering pieces of their 1st and 2nd presentation attempts.  What a difference indeed!  I had almost no complaints about the presentations today, whereas in the first couple, I could have come up with decent lists of nit-pickings for most groups (including my own).  Between watching the presentations today and going last, I really didn't think we would win 1st or 2nd.  I remember thinking, "wow.  These people have stepped up their game.  I feel kind of out of my league".  Honestly, it dialed up the pressure for me.  I was more nervous for this presentation than the in-class ones, and now I realize a lot of that was from watching a full round of them first to see what everyone else had brought to the table.

I hope the other groups don't take the opinion of the judges too seriously though.  I personally found some of the questions from the judges to the groups (including mine) to show a lack of understanding in some areas, or a difference of opinion on some things that I really didn't agree with.  After all, it's pretty hard to judge the quality of a software project/idea based on a 7-9 minute pitch.  It was nice to have good feedback from successful "adults" (whatever that means), though.

Monday, May 12, 2014

Ahh, a living project!

Nice feeling.

We're pretty much done now - everything is working, the website is being (re)launched, and we're good to go.  We rehearsed our final project pitch for tomorrow, which now includes a small section by David Strawn.  After that, we make the archive and review each other and ourselves to call it a day.

I'm excited to finally finish the presenting portion of the course tomorrow.  I'm a little dismayed at how much presenting the course has entailed, and find myself questioning its value.  We had to give a solo project pitch to the class.  As a group, we've now given two 7-9 minute project pitches to the class, and will give a third one tomorrow.  When we were rehearsing today, I was thinking about the somewhat repetitive nature of this activity.  While we have tuned and tweaked our presentation, it feels odd to rehearse and give it so many times.  Perhaps that's a good thing, in that we should be prepared to give a solid show tomorrow.  But at the same time, I wish I had gotten more 'meat' from the class.  It has felt like a kind of "graduating exam", where the class is more about everyone proving themselves to the professor and TA than it is about learning things related to software engineering and computer science.  I liked this aspect of the course at first (this "selection" and "competition" aspect), but after a while it has made me (and probably others) wonder what the course is meant to teach us about either CS or SE.

Friday, May 9, 2014

Thoughts on programming language debates

By and large, they're pointless.

Yes, programming languages are an important part of computer science and software engineering.  But sometimes I get the impression a lot of people think programming languages == CS & SE.  I would argue that languages are an implementation of ideas from CS and SE (i.e. paradigms), and little more.  We get so wrapped up in the whole "this language vs. that language" thing that I think we tend to forget that the ideas behind a programming language are far more important than the language itself.

Let's take the classic C++ vs Java debate.  Personally, I prefer Java.  This is basically for two reasons:

1.) I find Java easier to debug.  For example, dereferencing a null pointer in C++ usually gives the classic SEGFAULT, with little information about why.  Java gives a concise NullPointerException, with a corresponding line number in the code (a quick illustration follows after point 2).

2.) I find that Java has an easier-to-use and more complete set of libraries.  The libraries in Java are documented in a single place for the most part (I don't really see this with C++).  I also find that the standard libraries in Java have more useful and easier-to-remember semantics than C++'s.
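To make point 1 concrete, here's a throwaway example (not from any real project) of what Java hands you when you dereference null:

public class NpeDemo {
    public static void main(String[] args) {
        String plate = null;
        System.out.println(plate.length()); // dereference a null reference
        // Output when run:
        // Exception in thread "main" java.lang.NullPointerException
        //     at NpeDemo.main(NpeDemo.java:4)   <- exact file and line, for free
    }
}

The equivalent C++ program just dies with a segfault, and you're off to the debugger to figure out where.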

But let's notice something about these two characteristics: they are not really definitive of the programming languages themselves, but rather of the systems by which they are implemented.  The fact that Java is easier to debug arises from it being run in a virtual machine, rather than natively on machine hardware.  C++ could, in principle, be made to run this way to make it easier to debug.  Similarly, one could imagine a good samaritan coming along and remaking the C++ STL to be easier to use and understand.  The point I'm trying to make is that it's these sorts of side features, not the actual language, that often determine which is 'better'.  If you compare the syntax and semantics of the core of the two languages, they are very, very, very similar in my opinion (such that I was able to pick up C++ quite easily from knowing Java).

If I may make a car analogy: what people generally like about and use to distinguish between cars are external features.  Things like the interior (even its freakin' color), dash displays, and how the car looks on the outside.  Performance of the vehicle is important to some people too, but generally only from an outside view - most just don't care what the engine actually looks like on the inside, since they have no reason to look.

Wednesday, May 7, 2014

New programming habit, courtesy of Mr. Strawn

After reading some of David Strawn's project code, I decided I really like the way he comments things.  He does stuff like this:

/* Setting up bla bla */
int x = 1;
int y = 2;
int arr[4] = {0};
/*******************/

/* Compute such and such */
arr[0] = x + y;
arr[1] = x * y;
/*******************/

At first I found it kind of strange, but I actually really like this methodology.  The result is truly readable code.  The /****/ blocks serve to show the end of a particular "logical block" of instructions.  This segmentation is way better, in my opinion, than just the sort of side comments you normally see at the end of a sparse number of code statements.  I've been using this style for commenting and documentation for the code I write at work (Sandia), and I think I will continue to.  When I show it to other people, they seem to be able to follow it pretty easily by virtue of it being segmented into kind of high-level events, rather than just a dense set of program instructions.

Monday, May 5, 2014

Reflections on class wrap-up

The class has been a mix of interesting as well as frustrating.

A lot of the time, the grading scale seems arbitrary.  Well, I guess it is arbitrary.  Probably by design.  There are some consistent deliverables, like blog posts, but even these do not have very specific guidelines for length or content.  Much of the work of the class has a very vague set of evaluations associated with it.  This seems kind of realistic, given my limited work experience in a professional software environment (Sandia), but for the class it's rather frustrating.  You never really know where you stand.  Do I have an A?  B?  Are there things I could be doing better?  On a related note, it sometimes feels like you're getting graded badly for things that you can't really fix; I know a few people have said they felt they were getting an unfair shake because their project idea "wasn't original enough", or something to that effect.  On the one hand, it seems cool to give winning projects credit for their 'impact', but what if you don't have a winning project?  Is that entirely a product of the quality of your pitch and proposal?  I think not.  It's largely a function of the value or appeal of your idea, which is true in life as well, but makes the class feel like you signed up for "Invention 460" or something.


Discussion: CS vs SE

As far as the discussion today about CS vs SE, I think many good points were brought up.  I would generally agree that CS is more about ideas while SE is more about products.  But I think everyone should remember that you can't have many products without solid ideas.  Some people in the class seem to have the sentiment that SE is some distant relative of CS, if the two are even related.  Things were said like "I had to learn things for this [class] project that I wasn't taught at UNM", as if to say their training here in CS doesn't help them at all in building software.  I just see so much wrong with this sentiment.  I would argue that if you handed a workload like this class presents to a 1st-year CS student, they would crumble.  Sure, we've all had to learn a lot to execute these projects, like new programming languages, web frameworks, database systems, and so on.  But there is a ton of knowledge in the background that we use all the time when we are designing or coding these software projects.  From algorithms all the way down to just basic code quality and conventions.  There's a reason you can't take an average Joe off the street (or an MBA, for that matter :) ) and get them to do this kind of work.  There are a lot of little skills and implicit knowledge all over the place about how to understand and work with computers, how to design robust and flexible code, how to keep your cool under frustrating or confusing conditions, and so much more.  In closing, I would argue that CS and SE are not disjoint fields at all; they each have regions that do not overlap (highly theoretical CS research, Facebook, etc), but their overlap is greater than their disjoint areas.

Thursday, May 1, 2014

A few words on unit testing

In short: I'm infected.

The more I use unit testing (mostly JUnit, but I'd like to get into it for C++ and Python), the more I like it.  It not only gives me a frame of reference for the correctness of code, but also helps me understand the purpose of sections of code more concisely.  I've even become a sort of automated testing evangelist, wanting to spread its teachings to others (like at my job).  It appears that much of our woes in the software world come from a lack of acceptance testing; we think a piece of software is correct, though we've only used it a few times in ways we probably already know it works (those cases were most likely tested thoroughly during development).  I'm starting to think more and more that automated acceptance/unit tests should be a requirement for every software project of significant size.  It's kind of a way to make an executable version of the software specification and requirements documents.  Bugs are found faster.  Stress goes down because developers are less fearful to make changes or refactor, because they have a tool which can verify the correctness and functionality of the code base quickly.  I think my next step for my unit testing journey is to actually write some tests BEFORE I write the program that solves them.  It sounded crazy when I first heard of such a thing, but I think I'm ready :)
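For flavor, here's roughly what I mean, as a minimal JUnit 4 sketch (TemperatureConverter is a made-up example class, not real code; the stub exists only so the tests compile and fail first, which is the whole point):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Red phase: the class under test is just a stub, so these tests fail first.
class TemperatureConverter {
    static double celsiusToFahrenheit(double c) {
        throw new UnsupportedOperationException("not written yet");
    }
}

public class TemperatureConverterTest {
    @Test
    public void convertsFreezingPoint() {
        // These tests ARE the spec: the real method has to make them pass.
        assertEquals(32.0, TemperatureConverter.celsiusToFahrenheit(0.0), 1e-9);
    }

    @Test
    public void convertsBoilingPoint() {
        assertEquals(212.0, TemperatureConverter.celsiusToFahrenheit(100.0), 1e-9);
    }
}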

Monday, April 28, 2014

Critiques on 2nd round, 2nd set project pitches

I'll just give my general thoughts on each group, some of which I may have already said in class.  This is mostly just from my memory, but serves as a set of impressions.

Automaton:

Good:
Game looks freakin sweet.  Animations and color scheme on it are vivid and engaging.  Presentation was better organized this time around.

Not so good:
I think they really, really need to take the intro to the game to a more basic level.  They kind of just jump in and say "there's all kinds of buttons and stuff here" and describe a small subset of them.  I think it leaves the audience confused about the game, even though we're computer scientists (and computer science is the subject the game is supposed to teach).  I think it would be wise to leave out the more complicated puzzles and instead do a very thorough puzzle that explains each piece of the solution and its function.  I guess it's late for this, but I also think it would be a lot less overwhelming to new players if many/most of the buttons were disabled or made invisible in the early puzzles.

Visual Scheduler:

Good:
Once again, their product is looking mighty fine.  This group is on track to win "project of the year", in my opinion.  Their tool is clearly useful, has an easy interface, and has a clear market.  Hell, the people in the class would probably use this thing if it were to stick around.  It definitely seems to beat the existing MyUNM facilities.  The idea raised about asking the audience for classes to add is very worthwhile.

Not so good:
Their presentation improved, but they could still use more rehearsal; sometimes it seemed like they weren't clear on where to go next.  Overall, I found the presentation to be pretty strong though.

G.E.R.A:

Good:
Ben gave a stronger speech this time.  He was pretty organized and to the point, without really looking back at his slides for reference (something I personally need to work on).  One thing I really liked is how Justin was following along with Ben's talk and updating the powerpoint presentation in the background without being asked.  I'd like to implement something similar for our next presentation.  Their demo went pretty well overall and the site looks good.

Not so good:
The fumble by Ben, of course, but it wasn't such a huge deal.  I do agree with Ackley that another member of the group (probably Justin) should have tried to provide an assist there.  I also agree with one of the class members that the demo should involve completion of some kind of mission, as it's the core unit of work for the application.

Wednesday, April 23, 2014

Reaction to 'leaders only' client meeting

It wasn't so much a meeting as an interview.

I had thought that I was going to be seeing both Nikan and Ackley at the same time; not sure why I thought that, since the time slots were unchanged (and thus they could only schedule one at a time).  There was a set of questions asked of me, which I expected.  Some were about the project timeline and how closely it was followed; having looked at the original team timeline recently, I was pleased to see we actually were in sync with it pretty well.  Others were about the team dynamics; overall, I've been quite happy with my team and the work they do.  And yet other questions were about my experience as a leader and what I would do differently.  I think my main thought on the last one was that I should have kept better tabs on my team members' assigned tasks and how they were being implemented to avoid ambiguity, confusion, or stalling.  I do think that overall I've been a pretty good team leader and I'd like to think my group is happy with my performance.

I'm now wondering what the last two class sessions will look like (Monday after next and the one following).  Are we going to do more group presentations?  I would hope not.  Are we going to have lectures like when the class began?  My best guess is that we're going to have lectures/discussions about the experience of the class or something like that, but who knows (except Ackley, I guess).

Monday, April 21, 2014

Reflection on the presentations today

I think we did really well today.  When it was first announced a week ago that we would be doing the presentations again this week, I wasn't quite sure what we should change for the next one.  Overall, we received pretty good feedback from both the students and Ackley.  But clearly, as shown by today, there was room for improvement.

One thing that I think made today's a lot better was our use of Prezi rather than PowerPoint.  This made the presentation more visually appealing, and having a few more slides really helped me to stay on track, I think.  It was only a few days ago that I decided both Alan and I should have speaking roles in the presentation.  I wanted to change the demo so that there was kind of a "user version" (the one I gave), which just showed how the application functions in normal use.  Alan then gave the "backend version", which gave more in-depth info and a demonstration of some of the behind-the-scenes intelligent choices the application makes.  I think this kind of two-demo approach really improved how we got our message across.  When done this way, the audience gets to see both what's above and what's under the covers.  Alan did a great job explaining the graph search we use and its purpose, I thought.  There were some questions about other stuff we could have added to the presentation; I would like to add more stuff, but don't think we can really afford to do so with a 9 minute hard limit.  I kind of wish that limit would get extended to something like 13-15 minutes.  Perhaps it will be a little more forgiving for the final presentation since we have like 2 hours (rather than 100 minutes for 6 groups).

I think the other groups I saw today also improved.  I really liked PowderAde's presentation.  Kishore did a super good job at explaining similar apps, what they do, and what they don't do that PowderAde does.  The style of his presentation was spot-on, I thought; kind of snarky, but in a good, business-shark kind of way.  David Strawn was suggesting that we could do something similar by showing what your average car repair online forum looks like (they're pretty hideous), as it's about the only thing we could compare our application to.  After all, part of the inspiration for this project was the belief that car forums are very useful for all of the communal car history knowledge they collect (common causes of certain problems on specific cars), but that their knowledge base is vague and hard to quantify.

Saturday, April 19, 2014

Presentation plans, project next steps

I'm hoping the next project pitch will be noticeably better than the first.  To this end, we are going to use Prezi (which is pretty freakin' sweet) rather than plain ol' PowerPoint.  Our demo is now a 2-man operation: I will run through a diagnostic as a user (no graph display or anything on the side, just answering questions like normal), then Alan is going to use a combination of the program and a Prezi show to demonstrate how his greedy graph search makes intelligent choices.  Overall, I think our presentation is going to be more smooth and interesting with the use of Prezi as a guide in the background.  I also think it's going to help me stay on point and get all of my points across.

On the project side, we're chugging along, but I'm growing more concerned about the amount of stuff we're slated to do before the project's end.  We have user logins, but need to add security to them; also, the actual functionality of user accounts (such as saving the location of a user in a traversal) is not implemented.  We also said we are going to add a mechanism for users to add comments to QuestionStates; I have written the storage backend for that, but the UI is only just started.  We still do our development and demos running locally, and I'm getting worried that a deployed version of the site will have unforeseen issues.  We have deployed and run it before, but not steadily.  I think these things are all going to work out, but I guess this is one of those parts of the process where the pressure and stress builds.

Monday, April 14, 2014

Reaction to 2nd set of product pitches

If there's one thing that today's festivities drove home to me, it's that organization is probably the most critical part of a successful (or at least coherent) pitch.

Parts of the pitches (today as well as last Monday) seemed to lack solid organization and planning.  This led to wasted time, poor format, or repeated material.  For example, one team had two speakers, but there appeared to be little coordination between what each was to talk about during the pitch.  Each speaker basically went over the same set of points.  In another pitch, it became clear the presentation hadn't been rehearsed for time when they burned through their material in about 4-5 minutes of what was supposed to be a 7-9 minute talk.  There were also instances where a group member would answer a question or state something about the project, only for another group member to chime in with a contradiction a moment later.  That last blunder isn't so much an issue with presentation prep as it is an issue with the team being on different pages about the project plans.

It's kind of hard for me to tell if my team's first presentation suffered from these errors.  I know we didn't have a timing problem, as our pitch ran for about 8 minutes without any filler or time-wasting.  I'd like to think the pitch we gave was orderly and on point.  There wasn't really a chance for our team members to contradict each other, because I did almost all of the project speaking; other team members just introduced themselves and chimed in as appropriate for questions.  A couple of the teams also lacked a solid conclusion, a flaw I'm well aware our presentation exhibited.  But to be honest, besides giving a conclusion that's as strong as the introduction, I'm not sure what else to work on for the next presentation.  I guess that's something I should ask Ackley, Nikan, one of my team members, or someone in the class about.  Cause I'm sure the last pitch wasn't perfect (as nothing is).


Friday, April 11, 2014

Meeting notes and next steps - 4/11

At our meeting today, we basically agreed that there are a few major action items we need to complete before the end of the semester.  Almost all of these were part of the original project proposal (seems so far back now), and the rest have come up in client meetings or as part of a product pitch.  In no particular order, these are:

1.) User accounts - to save location in a current traversal to come back to later, at the very least.  Saving the set of previous diagnoses would be really nice too but has a lower return on investment for the time.
2.) Ad placement - part of the revenue stream.  At the last product pitch (and in the proposal), we stated we were going to use targeted ads.  We need to research an API for this to make our lives easier, and implement it.
3.) Backtracking - this is nearly done.  Allows user to cancel a question answer and return to the question again.
4.) Email box - the display is present, but not functional.  This constitutes the "contact us" portion of the website for when users have a comment or issue to report with our diagnostic procedures or general things.
5.) Additional diagnostic charts - this is mostly my area.  As of yesterday there are full charts for two topics in the database - cooling system and starting issues.  We need at least a couple more, I'm currently thinking one for brakes and maybe one for vehicle noises.

I think this is all doable in the remaining time, but we need to keep it up.  If we sag in the next week or two (which is possible, as many classes have re-upped their workloads from the spring break cool-down), we could get into some trouble.  Again, all of these features are really required for us to claim that we successfully completed the project described in the proposal.  We would also like to do a few other UI things to enhance the UX if we can.  Some of these features are directly related to UX though, such as user accounts, backtracking, and the email box.

Wednesday, April 9, 2014

Client meeting notes - 4/9

Both Ackley and Nikan (our clients) agree: we need to focus more on the user experience (UX).

To this end, Sonny's action item for the week is to start implementing (and hopefully get close to finishing) a user account interface for our site.  There are two main features this will give to our users.  The first is to allow users to save their position in a graph traversal and come back later using their login; this feature is crucial, because some QuestionStates may require the user to go away for some time as they fiddle with the car.  It's very important that they can come back and pick up right where they left off.  The second is to save the successful traversals (diagnoses) that a user has performed.  This feature is not as critical, but would be very nice for the UX.  This will give them a kind of service history for their car, showing issues that have come up with it and how they were resolved.  These could form the basis of a simple set of maintenance records for a user.
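As a rough sketch of what the "save your spot" feature might look like on the database side (collection and field names here are made up for illustration, not our actual schema), using the plain MongoDB Java driver:

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;

public class TraversalBookmark {
    private final DBCollection bookmarks; // e.g. db.getCollection("bookmarks")

    public TraversalBookmark(DBCollection bookmarks) {
        this.bookmarks = bookmarks;
    }

    // Upsert one bookmark per (user, chart): overwrites their previous spot.
    public void save(String userId, String chartId, String questionStateId) {
        DBObject key = new BasicDBObject("userId", userId).append("chartId", chartId);
        DBObject update = new BasicDBObject("$set",
                new BasicDBObject("questionStateId", questionStateId));
        bookmarks.update(key, update, true /* upsert */, false /* multi */);
    }

    // Returns the saved QuestionState id, or null if the user never saved one.
    public String resume(String userId, String chartId) {
        DBObject key = new BasicDBObject("userId", userId).append("chartId", chartId);
        DBObject doc = bookmarks.findOne(key);
        return doc == null ? null : (String) doc.get("questionStateId");
    }
}

The upsert keeps it to one record per user per chart, so "coming back later" is a single lookup at login time.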

Alan is going to create a sort of demonstration that shows his graph algorithm in action.  This will help the UX by showing users the kinds of things MechanApp does in the background on their behalf that they don't necessarily see in the UI.  I think the end result of this task should be some kind of short video explaining the intelligence behind the system that can be watched from the MechanApp.net homepage.

David is working on backtracking still (along with Sonny), but it should be finished soon.  This is another critical function that will allow the user to cancel an answer they have selected and go back to the question again.  This is not only important if the user accidentally selects the wrong answer to a question, but it also will make the user feel MUCH more in control of the process, rather than some kind of semi-passive actor as they are now.
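Conceptually, backtracking is just a stack of visited states; here's a tiny sketch of the idea (illustrative only - not David's actual code):

import java.util.ArrayDeque;
import java.util.Deque;

public class TraversalPath {
    private final Deque<String> visited = new ArrayDeque<String>(); // QuestionState ids

    // Called every time an answer moves the user to a new QuestionState.
    public void advance(String questionStateId) {
        visited.push(questionStateId);
    }

    // Cancel the last answer: drop the current state and return the one to
    // re-display, or null if the user is already back at the starting question.
    public String backtrack() {
        if (visited.isEmpty()) return null;
        visited.pop();
        return visited.peek();
    }
}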

My main action item for the week is also related to the UX.  I am adding at least one more full graph to the database, maybe two.  Content is becoming an issue.  Our demos and pitch are going to quickly lose steam if we are always going over the same diagnostic, "won't start".  Adding some diagnostic procedures from different systems on a car (like brakes, cooling system, etc.) will show how flexible our system is.  It will help to show that our system is a general framework for auto diagnostics, and not just a 'bag of tricks' of some kind.  And it will definitely make the system more usable.  Alan even said his Jeep is having some cooling system issues, and I'm hoping that adding a chart related to that could allow us to make him and his car the guinea pig for a real-life usage :)

Tuesday, April 8, 2014

Presentation reflection

I think that, overall, we did pretty good on Monday.

It's kind of hard to tell if you're getting your points across during a presentation sometimes.  I felt like I was explaining everything the way I meant to (mostly), but there's usually that doubt anyway.  A couple other students in the class said the presentation was very clear, so I hope that's true.  Our website looked good, as did our one-sheet handout (thanks Sonny!).  I think the team looked pro; we were well-dressed and gave good introductions.  The precise description I gave to everyone of the presentation plan really, really helped.  It made everything smooth and predictable on our end because everyone knew already what was going to happen (or, at least, I did :) ).

I found Ackley's criticisms to be accurate.  I'm going to talk to David or Sonny about adding some kind of field in the database like a 'title' for each QuestionState, which can be displayed in a larger font.  That would help with presentations and demos, and also make the program feel more cohesive, I think.  That's a pretty simple addition for a good return on investment for the User eXperience.  Ackley also said the presentation didn't have a clear ending, it just kind of 'fizzled out' (or something like that).  He was right, and that was a result of not correctly anticipating where I wanted to end the demo and not quite knowing what I was going to say to conclude.  What I really needed was a quick review and summary to wrap it up and reiterate the main points.  Next time...

Saturday, April 5, 2014

Meeting notes and work log - 4/5

We had a pretty productive meeting today (about 3 hours).

We selected a new background image for the website.  Sonny is going to do the facelift we desperately need so it will identify more with our target audience (hobbyist mechanics).  I am planning to get together next week with a friend that's into photography to take a custom photo for our background.  I'm thinking something with my car, hood up, some tools laying around.  You know.  Mechanic type stuff.

We also discussed some bugs and new features we will implement.  I showed the group my newly redesigned "no start" diagnostic tree, which they liked.  It has a higher branching factor so that it can be better pruned by Alan's prioritizing search, and has a consistent node numbering scheme (node ids increase in a left-to-right, depth-first traversal).  I implemented and double-checked (hell, quintuple-checked) the database record for this new tree; it is currently in a separate copy of the database though, because a bug has arisen from the new structure (the code was supposed to handle this structure, but it turns out it currently doesn't).
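The numbering scheme is easy to automate, by the way; something like this (illustrative names, and assuming the chart is a tree so each node is visited exactly once):

import java.util.List;
import java.util.Map;

public class DfsNumberer {
    private int next = 1;

    // Assign increasing ids in left-to-right, depth-first order.
    public void number(String node, Map<String, List<String>> children,
                       Map<String, Integer> idOut) {
        idOut.put(node, next++);
        List<String> kids = children.get(node);
        if (kids == null) return; // leaf node
        for (String child : kids) {
            number(child, children, idOut);
        }
    }
}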

As far as presentation materials, I filled out most of the Business Model Canvas so we could use it as a reference to make our one-sheet.  Sonny is working on the one-sheet with the highlights of that BMC document.  I made the short PowerPoint presentation (two slides - title/intro and technical components) and it's ready to go.  I now need to rehearse my talking points (I'm the main presenter for our pitch on Monday) and talk a bit more with the group about the flow (in particular, with my 'driver' Sonny).  I'm going to make note cards for this purpose; I saw a few people use them for the individual product pitches way back, and it seemed to help.

Friday, April 4, 2014

Meeting and task notes - 4/4

We are taking the things discussed in our client meeting and putting them into action.

Our next feature to be implemented is graph backtracking, where people can cancel the answer that brought them to the present state and go back to the previous one.  To me, this feature is just critical for the user experience.  If someone makes a mistake, or doesn't like where the process is leading them, they need to be able to take a bit of control and retreat to some previous state.  This feature doesn't seem like it will be too costly, and will add tremendous value to our application.

Tomorrow, the team and I will meet for a code review and clean-up session.  We have a list of issues and things we intend to fix during this time.  Some of these are for cleanliness, others are for robustness.  For example, we are looking to change the way my code casts objects from MongoDB documents to POJOs to be less dangerous.  This will also be a good opportunity for everyone to look more closely at the other components of the project so we can all better understand the modules we may not have directly worked on.

For tonight and before the meeting tomorrow, I'm mostly working on the presentation materials for Monday.  I filled out most of the Business Model Canvas and made the technical slide.  Tomorrow, the team and I are going to use the BMC as a guide to making our one-sheet handout for Monday.  Additionally, we need to discuss presentation planning and format and rehearse it a bit.

One day at a time I guess.  Weekends are for psychology students.

Wednesday, April 2, 2014

Client meeting reaction - 4/2

Whew.  That got a little heated.  In a good way, though.

Ackley told us some stuff we really needed to hear about the state of our application.  Things that are much harder to see from the 'inside' as the development team.  Things like user experience and what a new user sees when they first come to our little world.  It was tempting to get defensive about some of the criticisms, but I think they mostly had merit.  I'm actually glad we can get honest feedback like that.  Per today, we are going to change the look-and-feel of the page; it currently has a kind of soft, open-road type of theme, which we will change to a more "mechanics and garage" type style (this is, after all, a mechanic's tool, no?).

There are also some usability features we desperately need.  One that came up today is the ability to backtrack manually in the graph search (i.e. cancel a selection and go back to a previous one).  To me, this feature is totally critical - users need to be able to have more control over where they are going.  We could also use more info at each state about why that state was selected - e.g. "Since you said the plugs are not sparking, we are going to check...".  Something as simple as that additional content would greatly help our user experience.

And then there's that search box.  A thorn in my side from day one.  We (somewhat reluctantly) agreed to add a search box where users can type a description of the issue they wish to diagnose with their car, which we then try to parse.  While it could be a good feature, it currently is unbalanced on its return on investment (ROI) in my mind.  The drop-down boxes are a good interface, one that the user should understand, and one they have to use if they, for instance, want to buy auto parts online anyway.  So, I think for now we are going to scrap the search box.  Ackley is correct that, at present, it's more of a 'trap' or confusion to users than a help.

Overall, it was a good meeting and I think we got some good feedback as well as bad.

Sunday, March 30, 2014

Work log - 3/28-3/29

Added some functionality to automate some database content addition.  In particular,  I wrote some code that will go through all of our QuestionStates (diagnostic graph nodes) and fill in corresponding Vehistory (value data for some particular car, used for greedy search) objects in the database for some given Vehicle.  I'm in the process of adding new unit tests for this code and I just updated some other unit tests for code that had changed in a couple ways.  In writing this code to script the addition of the Vehistory objects, I realized how ridiculous it was that I was adding so many by hand before.  For our last client meeting/demo, I added a full diagnostic chart to the database and needed corresponding Vehistory objects for each node in that graph (a minimum of around 35 in this instance) - so much copy-and-paste!  I guess what they say about that is true - if you're doing copy-paste stuff a lot while you're developing, you're doing it wrong.  Very wrong.  
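The scripted version boils down to something like this (names are illustrative, not the exact code I wrote):

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;

public class VehistorySeeder {
    // Insert a default Vehistory record for every QuestionState, for one vehicle.
    public static void seed(DBCollection questionStates, DBCollection vehistories,
                            String vehicleId) {
        DBCursor cursor = questionStates.find();
        try {
            while (cursor.hasNext()) {
                DBObject state = cursor.next();
                DBObject vehistory = new BasicDBObject("vehicleId", vehicleId)
                        .append("questionStateId", state.get("_id").toString())
                        .append("value", 0.0); // no repair history yet
                vehistories.insert(vehistory);
            }
        } finally {
            cursor.close();
        }
    }
}

One loop instead of ~35 hand-pasted documents per chart, and it won't silently miss a node the way my copy-paste approach could.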

Thoughts on hearing the talk by the head of Van Dyke

I was mainly surprised (though I really shouldn't be) by how many processes and rules the guy has to create in order to manage people (rather than, say, technology and such).  I think as undergrads in Comp Sci, we have this impression that the hard part about this business is technical problems like programming and algorithm design.  In class on Monday, the speaker was making me think about how much of a people field software engineering really is.  If you think about it, we are in one of those areas where we often (hell, usually) don't produce a physical product, per se.  Rather, we produce an intellectual product, like a team of story writers.  When you think of it that way, you realize that software engineering management has a very different set of challenges than, say, running a paper mill.  It seems to me that a guy like Van Dyke spends his management energy almost entirely on making sure his people are happy, productive, well-placed, and have what they need.  That sounds like the job of any manager, but a software manager appears to have fewer of the other normal responsibilities, like ordering inventory and raw materials or determining hours of operation and sales discounts (if one managed retail, bleh).  Van Dyke had a lot to say about ideas he cooks up or reads about and then implements with his team, like 'Bootleg Friday', a policy where every Friday his developers can work on what they feel is most important at that point (and not necessarily their main project).  His discussion of 'time buffers', which allow developers to have some room for error in their time estimates for tasks, serves the same purpose: keeping his people happy and productive, yet accountable.  It seems clear that the reason Van Dyke Software is still around is that the guy who runs it understands that his most important (and in some sense, only) asset is his employees.  It's the same reason we're always hearing about the ridiculous perks employees at places like Google receive for working there.  Like Ackley always says, "software engineering is something that people do".

Wednesday, March 26, 2014

Client meeting notes - 3/26

Ah, a decently working demo :)

Today, we showed Nikan a demo of the user selecting a car type and symptom (though currently we only have one symptom chart implemented, for a car that will not start) by either searching for it or using the provided drop-down menus.  The program then displayed a question/request to the user for some kind of diagnostic test (like "does the car's engine try to turn over at all?"), and there were buttons for each possible answer to the question.  In this particular diagnostic chart, every question is "Yes-or-No".  Through this interaction, the user (Nikan) was able to go through the chart in a simple way to try and diagnose the vehicle.  The user interface looked good, the program ran smoothly, and real-life data was being utilized.  It's looking like our core functionality is getting close to complete.

We still have much to do.  We haven't yet covered the case of when a chart doesn't solve a user's problem (they run through it but say nothing worked).  We wish to add some kind of form for this purpose that will email us with the complaint of the user.  On my end, there are database functions and tasks which I need to automate (such as adding initial Vehistory (Vehicle-history) objects to the database).  There is also content we need to add.  We have one full diagnostic chart with about 35 states in it; I envision that by the project's end, we will have something like 3-5 of these to provide a fuller picture.  There is also an idea of providing links to YouTube or Wikipedia at steps in the diagnostic process a user may find confusing.  This may be a good idea, with a good return on investment, but I'm currently more concerned with adding basic functionality and framework than straight-up content (same goes for new diagnostic charts).  Overall, I think the project is on schedule and we're doing pretty good.

Sunday, March 23, 2014

Work log - 3/23

I've added a full, real diagnostic chart to the database.  It guides a person through trying to diagnose why a car won't start.  It has a whopping 33 nodes/QuestionStates in it.  I'm planning on having at least one more done by our client meeting on Wednesday, maybe two.  I also spent some time today and yesterday updating the data model documentation and the implementation of it.  In particular, I added stuff so that when a user selects a symptom to diagnose, the program will pick the most likely entry point into a diagnostic chart if there are multiple places to start.  Unfortunately, some of these changes affected other code which has to be (lightly) refactored in a couple places, and also made some of my unit tests either break or no longer constitute a valid test of correct behavior.

Friday, March 14, 2014

Project notes - week of 3/10

We did a semi-successful demo this week at the client meeting I didn't attend (slept right through the damn thing).  The demo showed a user selecting a symptom from a drop-down menu, which then accessed the database (MongoDB) and returned the relevant test/question to be posed to the user.  It was kind of rough, but all of the pieces were there.  My database query code seems to be working well, but I need to add some new features to it for the next week.  In particular, there will be a couple updates to the database schema model:

#1: Symptom objects will now contain a list of possible entry states into a diagnostic chart, rather than only one.  This will allow my database query code to find the most promising node to start at (by looking at vehicle history data).  The change in the schema will be simple (as databases like MongoDB have no 'schema', per se), but the change in the code will be slightly more involved.  I'm going to refactor a method that finds the maximum value child node so it can be used to find the maximum value entry state node for a Symptom object (a rough sketch of this follows after #2).  I don't anticipate that being super difficult, though.

#2: The database schema needs to support more vague queries for a Symptom object or entry state node based on input from the text search box.  I'm still thinking about the best data structure for this purpose.  The current idea is something like storing a big list of "hint" strings (variations on exact ones like "won't start") which can be searched against, and that will map to Symptom objects.  This has less to do with my query code and more to do with what's in the database for the server code to search against.
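Here's the shape of the refactor from #1 (hypothetical names): one generic "pick the highest-value node" helper that works both for choosing among a Symptom's entry states and for the greedy step over a node's children:

import java.util.List;

interface Node {
    double getValue(); // frequency-of-occurrence weight from vehicle history
}

public class NodePicker {
    // Works on any candidate list: entry states for a Symptom, or child nodes.
    public static Node maxValue(List<? extends Node> candidates) {
        Node best = null;
        for (Node n : candidates) {
            if (best == null || n.getValue() > best.getValue()) {
                best = n;
            }
        }
        return best; // null if the list is empty
    }
}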

So much for spring break.

Saturday, March 8, 2014

Work log - 3/8/14

Whew.  I've got a working set of functions for querying the MongoDB knowledge base component of the program.  The actual query code is decently compact (about 200 lines).  There are also 4 object classes that define the 3 database collections/objects, plus an additional one for convenience in the query code.  Each of those classes is around 75-100 lines, but largely filler stuff like getters and setters.

I decided I really needed some unit testing for this code.  I am a believer in the value of unit testing, but I often don't practice it myself.  I decided to change this by writing comprehensive unit tests for all of the externally-used functions I wrote as part of the KBQuery class (6 in total).  The unit testing code is quite large itself, comparable to the actual code it tests (~150 lines versus 200).  That's quite a bit extra to add, but through the process of writing and executing those tests I found several juicy bugs that would have been nasty 'gotchas'.  I think without the tests, my code would still have those bugs until they were discovered in some roundabout way from the other project components calling my stuff.  I think it's gonna be much better to make my stuff as robust as possible before trying to integrate it with the team's.

Wednesday, March 5, 2014

Client meeting reaction - week of 3/3/14

Today's client meeting with Professor Ackley sort of felt like deja vu.

     We got to demo some stuff (mainly the web UI), but a lot of the things we wanted to demo were cut short.  Both Alan and David had some stuff they wanted to show but didn't get around to.  I wanted to explain why I chose MongoDB over the other two database systems I researched.  Part of what took so much time is that we were talking about that damn text search box again.  The web UI has drop-down menus so a user can select their vehicle info (year, make, model, etc.) and a symptom they are having.  The text box is merely another interface to populate those drop-down menus; if you enter a search in the text box that leaves out critical info (e.g. the symptom), the user will be prompted for the additional info.  I have suggested to the people on my team working on this section (David and Sonny) that they implement the text box ASAP so we don't have to keep explaining or justifying its presence and purpose.  I think it will show itself as a simple apparatus once it's functional and we can put this issue to rest, hopefully.

     For next week, everyone on the team is working on their interaction with the other components of the code.  In particular, we are all writing some kind of description of the events that occur during operation (what I'm calling an "interaction spec") and beginning to implement the described behavior.  In my personal case, I'm looking to complete the software detailed within my spec by the next client meeting (I already have a good chunk of it completed as of today).  I'm hoping the interaction specs and related software will make the bigger picture of what the project will look like and how it will function much clearer, both for future client meetings and within our group.

Work log - 3/5/14

I finally got in the zone around 3:30 today and churned out a good-sized chunk of code.  I wrote 3 classes to represent the database schema objects in the Java Play server code.  I also figured out how to connect to my remote MongoDB database and query it.  Furthermore, I wrote a couple of the search methods the other members of the team need to use to get info out of the database.  I was initially bogged down today by trying to use some Play-oriented MongoDB plugins, which I couldn't figure out.  I made a breakthrough when I downloaded the regular MongoDB Java driver package (jar file) instead.  I was also having trouble seeking out and using some kind of ORM - Object Relational Mapping - setup to directly translate the MongoDB documents into Java object representations.  After a while of being frustrated with that, I decided it was easier (and, I will claim, more efficient and less complex) to just write a method for each database object class (3) that translates a MongoDB query object (com.mongodb.DBObject) into my custom objects.  I have emailed my team about this progress, as I know they need some of this functionality to continue with their own sections of the project.  I'm sure they will have more methods to request and some input on additional parameters or outputs for the ones I've written.  Feels good to get some working code out.
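The translation methods are dead simple, which is part of why I gave up on a full ORM.  The shape of it (class and field names here are illustrative, not my actual schema classes):

import com.mongodb.DBObject;

public class Symptom {
    private final String id;
    private final String description;

    public Symptom(String id, String description) {
        this.id = id;
        this.description = description;
    }

    // One of these per database object class: DBObject in, typed object out.
    public static Symptom fromDBObject(DBObject doc) {
        return new Symptom(
                doc.get("_id").toString(),
                (String) doc.get("description"));
    }
}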

Monday, March 3, 2014

The Joel Test: Part 2

More on "12 steps to better code".

#7: Do you have a spec?

    This is something I've been harping about for like a week now.  We have several components in our system (such as database, web front end, etc.) and have been rather informally discussing how they will interact with each other.  It's been causing a lot of ambiguity and confusion about which parts do what and how they go about their business.  Related to this test, I want my team (myself included) to write what I'm calling 'interaction specs' this week - detailed descriptions of how their section(s) of the project will interact with their neighbors.

#8: Do programmers have quiet working conditions?

     This is another one that doesn't really apply to us, since we are not a full-time development team that works in a common location.  I think we all do have quiet working conditions (such as our homes or the library), so I think we can get a pass for this one.

#9: Do you use the best tools money can buy?

     We are probably failing this test, but that's because we don't feel we should be spending money to make a school project (which will likely never draw revenue).  We are still using good tools such as IDE's (Eclipse, IntelliJ), database management services (MongoLab), and source control (BitBucket).  However, all of these tools are free.  I don't know that we would be better off with any paid tools than we already are.  Hard to say, since we aren't shopping for anything that costs money.

#10: Do you have testers?

     Ah, another test our team must fail because of the business environment it is in.  We are the whole team, so obviously there are no testers.  I think Joel makes a solid point here, that many groups pay programmers to do relatively unskilled work like software testing.  I myself have been tasked with doing such testing at my work; it can be helpful to know about software, but generally, you could recruit most computer-literate people to do it.

#11: Do new candidates write code during their interview?

    This test makes me cringe!  I haven't had to write code during an interview yet (I've only had one tech job interview, at Sandia).  I can't tell if it's a reasonable thing to do or not.  I think there are people around who write great, great code but do it at a different pace.  Some people take their time and write it well the first time, with minimal debugging.  That approach would make them look slow-to-code or perhaps unskilled in most "write me some code that does X" style interviews.  I'm not convinced about the validity of this test.

#12: Do you do hallway usability testing?

     While our team doesn't have offices (or any such shared space), I do want to implement some things like this.  The only issue is that our UI is only one component of the code.  I do feel, however, that this kind of test is related to things like code reviews and having to explain to each other what your code does and how it's structured.  It gives you another pair of eyes and sometimes explaining it helps you see the process more clearly (this is sometimes called "rubber duck debugging": http://en.wikipedia.org/wiki/Rubber_duck_debugging).

The Joel Test : Part 1

There's a software engineering blog I have read on and off for a while, found here:

http://joelonsoftware.com/

A fellow at my work recently retired, and left some books behind.  Among them was a book by this same guy (Joel Spolsky) called "Joel on Software".  I was particularly interested in an article both present in the book and at the following link, called "The Joel Test: 12 Steps to Better Code":

http://www.joelonsoftware.com/articles/fog0000000043.html

It outlines 12 litmus-type tests that, according to him, indicate the health and productivity of a software development team.  I'm planning on following most of this advice with my own team, and have already implemented some of it (though some parts remain).  I'll discuss the first 6 tests here, and the latter 6 in a later post.

#1: Do you use source control? 

    This kind of seems like a no-brainer for a team project to me.  I've never done a team project without some kind of source control tool.  Sounds like a friggin' nightmare.  For this project, my team is using Git in conjunction with BitBucket.

#2: Can you make a build in one step?

     I feel as though this test will be a challenge to implement for us.  We are using several different technologies, including a remote MongoDB instance, Java code in conjunction with the Play web framework, and web stuff like HTML, CSS, and Javascript for the front end.  I intend to insist we can pass this particular 'Joel test'.  If we need some list of 15 things to re-bundle and create the software project, that's gonna stress everyone out and bog us down near the finish line.

#3: Do you make daily builds?

     I actually don't think this one applies to us as much.  For one, this project is not a full-time job for us, so daily builds are not really necessary.  However, I do think that once we are settled into our basic code base and are making improvements and adding features to it, weekly or bi-weekly builds would be nice.  Note that this test can really only be feasible if you can pass test #2.

#4: Do you have a bug database?

   This is a tool that we had not discussed before I brought up this article to the team.  I have seen bug-tracking programs in action, and have become a believer in them.  I was originally going to use a separate one (e.g. 'Buggle'), but it was pointed out to me that BitBucket has a built-in issue tracker.  We'll see if that does the job when we get deeper into development.

#5: Do you fix bugs before writing new code?

     I will demand this from the team as much as possible.  If you're trying to add features on top of buggy code, you might as well be building a skyscraper on quicksand.  You'll also burn time debugging things that are the fault of already-buggy code, rather than the new code.

#6: Do you have an up-to-date schedule?

     We are currently failing this test, in my opinion.  Our schedule and milestones are evolving so rapidly that the written version is simply inaccurate.  I will be working to fix this soon.

Tests 7-12 to come in a later post.

Wednesday, February 26, 2014

Client meeting reaction and project log (week of 2/23)

Our client meeting went MUCH better this time, or at least it felt that way to me.  Since our last one, we did some soul searching and had some important conversations on design and implementation plans.  This time we also had documents to back up our story - we had a more complete user story with a website mockup from Sonny, Alan provided a detailed description of the graph search we intend to use, and I provided a comparison summary of the database systems I had researched.

Going forward, we are trying to make some skeletons and prototypes.  Alan is going to write his basic setup to perform greedy graph search on diagnostic charts with edge weights (as well as managing and updating those edge weights when solutions are reported by the user).  David is going to stand up two web frameworks, Django (Python) and Play (Java/Scala), so he can compare them.  Sonny is going to provide us a basic website like the one his mockup depicted.  Finally, I am going to stand up a remote MongoDB database and work with Alan on a schema for it which will play nicely with his component of the system.  The database will store diagnostic procedures and information about which causes from those procedures are the most likely (i.e. are reported most often).

I feel a bit better about the status of our project than I did last week.

Sunday, February 23, 2014

Some musings on databases and the NoSQL vs. SQL debate

So I've been doing some research into what kind of database this project will use for the knowledge base component.  There may be a separate database for managing user accounts, which is a much simpler functionality.  The short of it is that I'm thinking MongoDB is a good fit...

One of the main things I've been trying to decide is whether we want to use SQL or "NoSQL", and what the real strengths and weaknesses of those two 'classes' are (if you want to call them that).  Note that NoSQL actually stands for "Not Only SQL", and some NoSQL databases have SQL-like query languages but use different representations and organization for data.  What I'm generally finding (and have seen in limited experience with systems such as MySQL) is that SQL databases, or more generally relational databases, are not all that good at encoding graphs or hierarchical objects.  This is a natural weakness of the explicit use of tables as the fundamental concept.  Also, there's often a disconnect between how most object-oriented programs represent data and how that data can be (and is) represented in a relational database; this difference is sometimes called the "impedance mismatch".

On the other hand, I'm liking what I find about NoSQL databases, and in particular document-oriented databases such as MongoDB.  Instead of rigid tables and pre-defined schemas, a document database stores data in formatted documents, which can be plain text but are often binary for efficiency.  There is, of course, a syntax to these documents, but there are generally not rigid requirements on what each object must or must not contain.  This kind of flexibility allows similar objects to store fields if they need them and omit them if they don't.  I find this strength to be in direct contrast to the column concept in SQL-type database systems.  You often see those kinds of database tables with some columns that are usually NULL - either because most records simply don't need or have that data, or because the column was designed into the schema in the beginning and remains because it's harder to remove it than to keep storing all those NULLs.

A key aspect of this decision is that we will have some complex data structures in the database.  Per the work Alan Kuntz is doing, we will be storing graphs in this database representing diagnostic procedures with edge weights reflecting frequency of problem occurrence.  I am deeply concerned that using a relational database for this purpose will cause us nothing but heartache and pain.  By contrast, I believe a procedural step object (node) in a system like MongoDB could look something like this (pseudocode of course):

object node_step
{
    "instructions": "check X",             // what's displayed to the user
    "id": 12345,                           // a unique key for this object
    "connected": [12678, 78912, 56742],    // connected nodes; could also point to Edge objects
    "value": 0.45                          // frequency of occurrence (based on repair history)
}

By contrast, I'm not at all sure how this would look in a relational database like MySQL.  I guess we would have a table for node_step with the columns shown above, except that "connected" couldn't really be a single column - a relational field can't hold a variable-length set of foreign keys, so the connections would probably need their own table.  To me, the document-oriented style just seems so much more suitable.  MongoDB uses BSON, a binary encoding of JavaScript Object Notation (JSON).  JSON is a language-independent data format that plays nicely with Java, Javascript, and many other languages, and its format is basically like that node_step object above.  I feel that the choice of this technology is a critical one because the knowledge base is essentially the core functionality of this application.
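
To get a feel for how that document maps to Java, here's a rough sketch using the MongoDB Java driver (the 2.x-era API, as I understand it).  The host, database, and collection names are placeholders I made up:

import java.util.Arrays;
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.MongoClient;

public class NodeStepDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder host/database/collection names
        MongoClient client = new MongoClient("db.example.com", 27017);
        DB db = client.getDB("mechanapp");
        DBCollection steps = db.getCollection("node_steps");

        // Build a document shaped like the node_step object above;
        // "connected" is stored as a real array, not a packed string
        BasicDBObject step = new BasicDBObject("instructions", "check X")
                .append("id", 12345)
                .append("connected", Arrays.asList(12678, 78912, 56742))
                .append("value", 0.45);
        steps.insert(step);

        // Fetch it back by its key
        System.out.println(steps.findOne(new BasicDBObject("id", 12345)));
    }
}

No schema migration, no join table for the connections - the object in the program and the object in the database have basically the same shape.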

Friday, February 21, 2014

What it's like to have your project picked

It's both a great and a worrying feeling.  Questions start rushing through your head.  Do I actually know what I'm doing?  Is my proposal doable?  Is this team going to complete the job with me at the helm?  I'm hoping the answer to all of these questions is YES.  I feel grateful that I think I've been assigned a good team.  I believe the guys in my group are solid.  I won't let 'em down.

To Mechanapp!

Sunday, February 16, 2014

Final project selections

Well, I did it.  I made my final project selections.

Thinking back on it, this was a pretty cool road that got us here.  We all had to cook up an idea, define and refine it, and try to sell it to each other.  I'm genuinely impressed at what some people came up with, and I'm excited (and a bit nervous) to see what gets picked and where I get assigned.  I'll admit that most of my project picks were heavily influenced by the presentations.  A notable exception was Automaton, a proposal for a computer science educational game by Luke Balaoro.  His presentation was good, but not quite good enough to make me note-to-self to go read his proposal.  I ended up reading it anyway, and wow.  Good stuff.  I hereby give the 'best overall proposal' award to Luke.  It was really well-written, gave a lot of solid detail, and describes an interesting idea.  I didn't rank it highest in my preferences only because I'm not sure how much I want to work on a game this semester (I don't think games are really my forte).

Good luck to everyone on the vote next week.

Friday, February 14, 2014

Thoughts on pitches and effective 'project marketing'

So we wrapped up our project pitches today.  I think mine was good, but there's that usual lingering feeling that I could have been better prepared and organized.  I'm noticing something interesting as I narrow my preferences for project assignment down to 5: the in-person pitches I've heard over the last couple days are a surprisingly strong influence on that process.  As I watched the presentations, I noted projects I found interesting enough to research further for my preference list.  I've grazed through the relevant proposals, but also found myself looking at ones I hadn't noted to pursue during class time.  What I'm finding is that some of those 'edge' projects are actually quite interesting, and a couple of them are possible candidates for my list.  It's kind of funny how much a two-minute speech can sway you for or against a project in comparison to a detailed 15-pager.  I suppose the speech format is more effective at manipulating our emotions (i.e. "do I like you?  do I believe you can lead this project?"), while the actual proposal paper appeals more to our logic and reasoning (i.e. "he seems to have everything planned out well").

Sunday, February 9, 2014

Proposal Review for Brandon Lites

Review of proposal: "Ambient Algebra"

Proposal author:  Brandon Lites
(blog: http://blitescs460.blogspot.com)
Reviewer: James Vickers (jvick3@unm.edu)

Proposal restatement
         The proposal is to make a set of mini-games which teach college students algebra concepts when played.  The project seeks to address high failure rates in college math courses and low proficiency of students.  The games will be accessible online and the site will track user progress and provide facilities for leaderboards and achievements for players.

Reviewer reaction
         As a former math tutor at CAPS, I know first-hand many of the problems this proposal discusses.  Many students are not motivated to learn math early on, but they can get quite interested if the topics are presented to them in more relevant ways.  I think educational games are a good way to do this, if they can be made appealing enough for college-age students.  I, like many others, have learned skills from games - I learned to type at a young age by playing educational typing games.

Quantitative scores

Format: 4
            Overall, the format is good.  I would consider trimming down the previous work section.  Some of the information included there does not appear relevant to the proposal.  The budget and timelines could be nicer (the budget should probably be in a spreadsheet or table rather than the way it's displayed).

Writing: 4
            The writing style is clear and concise, but the paper needs a proofread and polish - some sentences are missing words or have the wrong word if you read them aloud.

Goals and tasks: 4
            The timeline lists each member for 3.5 hours for the first two weeks, but at least 10.5 hours per person for each subsequent week.  Sounds like a risky slow start to me.  Otherwise, the timeline and its milestones seem reasonable.  I like how the timeline has a min-max range for hours worked each week.

Scope: 3
            The project is described throughout the proposal as a supplement to mathematics instruction.  However, at one point it is stated that "Ambient Algebra is designed to replace a student's homework in which they solve problem after problem".  I think this single statement may be a dangerous overreach of scope for this project.  It would likely cause backlash from universities, and it may not be best for students to practice in a totally different format (game vs. on paper) than their exams and quizzes.

Plausibility: 5
            Project appears perfectly feasible, and the author clearly identifies the technologies to be used.  There is, of course, a serious challenge to be had in making a game both fun and educational.  I think this may be amplified by the fact that the game is targeted for college-age students; I think marketing may be a key factor in getting these students to want to play games of this nature.

Novelty:  3
            Early in the proposal you say that, of existing educational math games, there are "none in which learning algebra is the secondary motivation of playing the game" (page 2).  Later, on page 4, you say that "there are websites that offer games to teach algebra".  As a reader, I took the first statement as a claim that no game websites for math education existed (which I was skeptical of).  The second statement acknowledges the other games and explains the differences between them and your proposal.  The main novelties of the idea are a different target audience (college-aged instead of grade-school aged) and the use of leaderboards and achievement tracking.  It's not clear to me whether the second novelty already exists elsewhere.
        
Stakeholder identification: 2
            Students (the main users) are identified as the major stakeholder.  The United States as a nation is sort of an implicit stakeholder in the proposal, through the discussion of its dismal test scores.  I think more should be said about some other key stakeholders, namely universities (who may suggest the site to students or even donate time or money to it) and the people or groups that sponsor students (such as scholarship foundations or parents).

Support and impact: 3
            The project will charge a fee of $10 per semester for access.  The budget section of the proposal claims that "With around 1400 students taking this course each semester, we can assume a revenue of $14,000 dollars per semester."  I find this statement way too optimistic.  You can hardly expect every kid in a math class to buy the correct version of the textbook and a calculator as it is.  The claim also forgets that the problem it seeks to address (the high failure rate of these classes) works against the projection - many students drop in the first 2-3 weeks from a lack of motivation or self-confidence, and if even a third of those 1400 never pay, the revenue falls below $10,000.  The pricing model itself may or may not be appropriate, since similar educational game websites instead collect revenue from advertising and do not charge their users any fees.

Evidence:  4
            Your motivation section (II) is SOLID.  Giving stats on the failure rates of early algebra classes at UNM and the relative scores of nations around the world really highlights the issue your project seeks to address.  The budget could use a little more breakdown and thought.  For example, programmers are going to be paid $35 per hour (when the national average is more like $45), and the workstation for the project manager costs twice as much as a team member's (though it's not clear why).

Challenges and risks: 4
            The main challenge discussed is making games that are both fun and educational.  Another is making sure the games are relevant to common areas of struggle for students.  I only gave this section a 4 because I think another challenge should be mentioned: convincing instructors to get over biases they may have about educational games so that they will recommend this one to their students.

Wikipedia: software design patterns

     First off, I find it interesting that the article quickly states that many software design patterns are object-oriented (and therefore involve explicit state), and are not very applicable to functional programming paradigms.  I'd like to read more about the design paradigms used with functional languages.  It seems like object-oriented design is at the heart of many (or most) of the design patterns listed in this article.
     I think the fact that design patterns are not directly implementable (i.e. are not software specs) is both a great strength and a weakness.  This trait means they are flexible and more abstract than actual software specs or prototype programs, but there can be ambiguity in implementing some patterns.  The implementations may vary across languages and platforms to a degree that calls into question whether they still reflect the design they were trying to adhere to.  But then, I guess that's a general statement about the common gap between software design and implementation.

Some interesting design patterns:

Bridge: "decouple an abstraction from its implementation allowing the two to vary independently".  To me, this sounds like a description of technologies like the Java Virtual Machine (JVM). 
Iterator: "Provide a way to access the elements of an aggregate object sequentially without exposing its underlying representation".  I love design patterns like this one.  When well-made, iterators are a nice abstraction that allows you to loop through a set or list without thinking about where things are or how they are stored.
Lazy initialization: "Tactic of delaying the creation of an object, the calculation of a value, or some other expensive process until the first time it is needed..."  I think most of the students in this class will think of Haskell when they hear this, except in that language it's called "lazy evaluation".  Though I think it's mostly an efficiency thing, lazy initialization is also cool because it allows for some flexible data structures (such as 'infinite' lists in Haskell).  See the sketch after this list for what it looks like in Java.
Proxy: "Provide a surrogate or placeholder for another object to control access to it".  I think this is a very common design pattern.  For instance, some large software systems have a kind of 'manager' module through which all reads and writes to a database must be handled.
Lock (parallelism): "One thread puts a 'lock' on a resource, preventing other threads from accessing or modifying it".  This functionality is often critical in multi-threaded applications to prevent screwy behavior.  Threading libraries like pthreads provide mutexes for exactly this, and even MPI, the message-passing library for C and Fortran, offers a way to 'lock' a shared resource across processes (via its one-sided communication windows).
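
As promised, here's a rough Java sketch of lazy initialization; the class and method names are made up, and the synchronized keyword doubles as a taste of the lock pattern, since only one thread at a time may run the initialization:

public class DiagnosticChart {

    private Graph graph;  // expensive to build; starts out null

    // 'synchronized' acts as the lock: only one thread at a time may
    // enter, so the graph can't be built twice concurrently
    public synchronized Graph getGraph() {
        if (graph == null) {
            graph = loadGraphFromDatabase();  // deferred until first use
        }
        return graph;
    }

    private Graph loadGraphFromDatabase() {
        // stand-in for the real (expensive) loading work
        return new Graph();
    }

    // placeholder type so the sketch is self-contained
    static class Graph { }
}

Callers that never ask for the graph never pay for building it, which is the whole point of the pattern.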