Sunday, March 30, 2014

Work log - 3/28-3/29

Added some functionality to automate database content addition.  In particular, I wrote some code that goes through all of our QuestionStates (diagnostic graph nodes) and fills in corresponding Vehistory (value data for a particular car, used for greedy search) objects in the database for a given Vehicle.  I'm in the process of adding new unit tests for this code, and I just updated some other unit tests for code that had changed in a couple of ways.  In writing this script for adding Vehistory objects, I realized how ridiculous it was that I had been adding so many by hand before.  For our last client meeting/demo, I added a full diagnostic chart to the database and needed corresponding Vehistory objects for each node in that graph (around 35 in this instance) - so much copy-and-paste!  I guess what they say about that is true - if you're doing a lot of copy-paste while you're developing, you're doing it wrong.  Very wrong.  
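
To give a sense of what that script does, here's a minimal sketch of the idea (the collection names, field names, and default value below are stand-ins of my own, not necessarily what's actually in our database): for a given vehicle, walk every QuestionState document and insert a default Vehistory document wherever one doesn't already exist.

    import com.mongodb.BasicDBObject;
    import com.mongodb.DB;
    import com.mongodb.DBCollection;
    import com.mongodb.DBCursor;
    import com.mongodb.DBObject;

    public class VehistorySeeder {

        public static void seedVehistories(DB db, String vehicleId) {
            DBCollection states = db.getCollection("questionStates");
            DBCollection histories = db.getCollection("vehistories");

            DBCursor cursor = states.find();
            try {
                while (cursor.hasNext()) {
                    DBObject state = cursor.next();
                    Object stateId = state.get("_id");

                    // Skip states that already have a Vehistory for this vehicle.
                    BasicDBObject query = new BasicDBObject("vehicleId", vehicleId)
                            .append("questionStateId", stateId);
                    if (histories.findOne(query) != null) {
                        continue;
                    }

                    // Insert a default-value Vehistory entry for the greedy search to use.
                    BasicDBObject vehistory = new BasicDBObject("vehicleId", vehicleId)
                            .append("questionStateId", stateId)
                            .append("value", 0.0);
                    histories.insert(vehistory);
                }
            } finally {
                cursor.close();
            }
        }
    }

A hundred lines of hand-typed Vehistory entries, replaced by one loop.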

Thoughts on hearing the talk by the head of Van Dyke

I was mainly surprised (though I really shouldn't be) by how many of the processes and rules the guy creates are for managing people (rather than, say, technology).  I think as undergrads in Comp Sci, we have this impression that the hard part about this business is technical problems like programming and algorithm design.  In class on Monday, the speaker got me thinking about how much of a people field software engineering really is.  If you think about it, we are in one of those areas where we often (hell, usually) don't produce a physical product, per se.  Rather, we produce an intellectual product, like a team of story writers.  When you think of it that way, you realize that software engineering management has a very different set of challenges than, say, running a paper mill.  It seems to me that a guy like Van Dyke spends his management energy almost entirely on making sure his people are happy, productive, well-placed, and have what they need.  That sounds like the job of any manager, but a software manager appears to have fewer of the other normal responsibilities, like ordering inventory and raw materials or determining hours of operation and sales discounts (if one managed retail, bleh).  Van Dyke had a lot to say about ideas he cooks up or reads about and then implements with his team, like 'Bootleg Friday', a policy where every Friday his developers can work on what they feel is most important at that point (and not necessarily their main project).  His discussion of 'time buffers', which allow developers to have some room for error in their time estimates for tasks, serves the same purpose: keeping his people happy and productive, yet accountable.  It seems clear that the reason Van Dyke Software is still around is that the guy who runs it understands that his most important (and in some sense, only) asset is his employees.  It's the same reason we're always hearing about the ridiculous perks employees at places like Google receive for working there.  Like Ackley always says, "software engineering is something that people do".

Wednesday, March 26, 2014

Client meeting notes - 3/26

Ah, a decently working demo :)

Today, we showed Nikan a demo of the user selecting a car type and symptom (though currently we only have one symptom chart implemented: the car will not start), either by searching for it or by using the provided drop-down menus.  The program then displayed a question/request to the user for some kind of diagnostic test (like "does the car's engine try to turn over at all?"), with buttons for each possible answer to the question.  In this particular diagnostic chart, every question is yes-or-no.  Through this interaction, the user (Nikan) was able to step through the chart in a simple way to try and diagnose the vehicle.  The user interface looked good, the program ran smoothly, and it used real-life data.  It's looking like our core functionality is getting close to complete.
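
The structure behind that interaction is pretty simple.  Roughly (and these class and field names are just my own illustration, not our exact implementation), each QuestionState holds the question text plus a mapping from each possible answer to the next state, so stepping through the chart is just one lookup per button press.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Illustrative shape of a diagnostic chart node.
    public class QuestionState {

        private final String id;
        private final String question;
        // Answer label ("Yes", "No") -> id of the next QuestionState to show.
        private final Map<String, String> nextStateIds = new LinkedHashMap<String, String>();

        public QuestionState(String id, String question) {
            this.id = id;
            this.question = question;
        }

        public void addAnswer(String answer, String nextStateId) {
            nextStateIds.put(answer, nextStateId);
        }

        public String getId() { return id; }

        public String getQuestion() { return question; }

        // Returns the id of the next node for a given answer (null if this is a leaf).
        public String getNextStateId(String answer) {
            return nextStateIds.get(answer);
        }
    }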

We still have much to do.  We haven't yet covered the case where a chart doesn't solve a user's problem (they run through it but say nothing worked).  We wish to add some kind of form for this purpose that will email us the user's complaint.  On my end, there are database functions and tasks I need to automate (such as adding initial Vehistory (Vehicle-history) objects to the database).  There is also content we need to add.  We have one full diagnostic chart with about 35 states in it; I envision that by the project's end, we will have something like 3-5 of these to provide a fuller picture.  There is also an idea of providing links to YouTube or Wikipedia at steps in the diagnostic process a user may find confusing.  This may be a good idea with a good return on investment, but I'm currently more concerned with adding basic functionality and framework than straight-up content (same goes for new diagnostic charts).  Overall, I think the project is on schedule and we're doing pretty well.

Sunday, March 23, 2014

Work log - 3/23

I've added a full, real diagnostic chart to the database.  It guides a person through trying to diagnose why a car won't start.  It has a whopping 33 nodes/QuestionStates in it.  I'm planning on having at least one more done by our client meeting on Wednesday, maybe two.  I also spent some time today and yesterday updating the data model documentation and the implementation of it.  In particular, I added support so that when a user selects a symptom to diagnose, the program will pick the most likely entry point into a diagnostic chart if there are multiple places to start.  Unfortunately, some of these changes affected other code, which has to be (lightly) refactored in a couple of places, and they also made some of my unit tests either not work or no longer constitute a valid test of correct behavior.

Friday, March 14, 2014

Project notes - week of 3/10

We did a semi-successful demo this week at the client meeting I didn't attend (slept right through the damn thing).  The demo showed a user selecting a symptom from a drop-down menu, which then accessed the database (MongoDB) and returned the relevant test/question to be posed to the user.  It was kind of rough, but all of the pieces were there.  My database query code seems to be working well, but I need to add some new features to it for the next week.  In particular, there will be a couple updates to the database schema model:

#1: Symptom objects will now contain a list of possible entry states into a diagnostic chart, rather than only one.  This will allow my database query code to find the most promising node to start at (by looking at vehicle history data).  The change in the schema will be simple (as databases like MongoDB have no 'schema', per se), but the change in the code will be slightly more involved.  I'm going to refactor a method that finds the maximum-value child node so it can also be used to find the maximum-value entry state node for a Symptom object (see the sketch after this list).  I don't anticipate that being super difficult, though.

#2: The database schema needs to support vaguer queries for a Symptom object or entry state node based on input from the text search box.  I'm still thinking about the best data structure for this purpose.  The current idea is something like storing a big list of "hint" strings (variations on exact ones like "won't start") that can be searched against and that map to Symptom objects.  This has less to do with my query code and more to do with what's in the database for the server code to search against.
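
Here's roughly the refactor I have in mind for #1 (class and method names here are placeholders, not our actual API): one generic "pick the highest-value state" helper that works both for a node's children and for a Symptom's list of possible entry states.

    import java.util.List;

    public class KBQueryHelpers {

        // Returns the QuestionState with the highest historical value for this vehicle,
        // or null if the candidate list is empty.  'getVehistoryValue' is a stand-in for
        // however the Vehistory lookup actually ends up working.
        public static QuestionState maxValueState(List<QuestionState> candidates, Vehicle vehicle) {
            QuestionState best = null;
            double bestValue = Double.NEGATIVE_INFINITY;
            for (QuestionState candidate : candidates) {
                double value = vehicle.getVehistoryValue(candidate.getId());
                if (value > bestValue) {
                    bestValue = value;
                    best = candidate;
                }
            }
            return best;
        }
    }

The same call then covers both cases: pass in a state's child nodes to pick the next question, or pass in a Symptom's entry states to pick where to start.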

So much for spring break.

Saturday, March 8, 2014

Work log - 3/8/14

Whew.  I've got a working set of functions for querying the MongoDB knowledge base component of the program.  The actual query code is decently compact (about 200 lines).  There are also 4 object classes: 3 that define the database collections/objects, plus one more for convenience in the query code.  Each of those classes is around 75-100 lines, but largely filler stuff like getters and setters.

I decided I really needed some unit testing for this code.  I am a believer in the value of unit testing, but I often don't practice it myself.  I decided to change that by writing comprehensive unit tests for all of the functions in the KBQuery class that get called elsewhere (6 in total).  The unit testing code is quite large itself, comparable to the actual code it tests (~150 lines versus 200).  That's quite a bit extra to add, but in the process of writing and executing those tests I found several juicy bugs that would have been nasty 'gotchas'.  I think without the tests, my code would still have those bugs until they were discovered in some roundabout way by the other project components calling my stuff.  It's gonna be much better to make my stuff as robust as possible before trying to integrate it with the rest of the team's code.
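
For the curious, the tests are shaped roughly like this (JUnit 4; the KBQuery constructor and method names below are stand-ins, not a paste of the real thing):

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertNull;

    import org.junit.Before;
    import org.junit.Test;

    public class KBQueryTest {

        private KBQuery query;

        @Before
        public void setUp() {
            // Point the query layer at a throwaway test database so tests don't touch real data.
            query = new KBQuery("mongodb://localhost:27017", "kb_test");
        }

        @Test
        public void findSymptomReturnsTheMatchingDocument() {
            Symptom symptom = query.findSymptom("won't start");
            assertEquals("won't start", symptom.getName());
        }

        @Test
        public void findSymptomReturnsNullForUnknownInput() {
            assertNull(query.findSymptom("definitely not a symptom"));
        }
    }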

Wednesday, March 5, 2014

Client meeting reaction - week of 3/3/14

Today's client meeting with Professor Ackley sort of felt like deja vu.

     We got to demo some stuff (mainly the web UI), but a lot of the things we wanted to demo were cut short.  Both Alan and David had some stuff they wanted to show but didn't get around to.  I wanted to explain why I chose MongoDB over the other two database systems I researched.  Part of what took so much time is that we were talking about that damn text search box again.  The web UI has drop-down menus so a user can select their vehicle info (year, make, model, etc.) and a symptom they are having.  The text box is merely another interface for populating those fields; if a user enters a search that leaves out critical info (e.g. the symptom), they will be prompted for the rest.  I have suggested to the people on my team working on this section (David and Sonny) to implement the text box ASAP so we don't have to keep explaining or justifying its presence and purpose.  I think it will show itself to be a simple apparatus once it's functional, and we can hopefully put this issue to rest.

     For next week, everyone on the team is working on their interaction with the other components of the code.  In particular, we are all writing some kind of description of the events that occur during operation (what I'm calling an "interaction spec") and beginning to implement the described behavior.  In my case, I'm looking to complete the software detailed in my spec by the next client meeting (I already have a good chunk of it completed as of today).  I'm hoping the interaction specs and related software will make the bigger picture of what the project will look like and how it will function much clearer for future client meetings, as well as within our group.

Work log - 3/5/14

I finally got in the zone around 3:30 today and churned out a good-sized chunk of code.  I wrote 3 classes to represent the database schema objects in the Java/Play server code.  I also figured out how to connect to my remote MongoDB database and query it.  Furthermore, I wrote a couple of the search methods the other members of the team need to use to get info out of the database.  I was initially bogged down today by trying to use some Play-oriented MongoDB plugins, which I couldn't figure out.  I made a breakthrough when I downloaded the regular MongoDB Java driver package (jar file) instead.  I was also having trouble seeking out and using some kind of ORM (Object-Relational Mapping) setup to directly translate the MongoDB documents into Java object representations.  After a while of being frustrated with that, I decided it was easier (and, I will claim, more efficient and less complex) to just write a method for each of the 3 database object classes that translates a MongoDB query object (com.mongodb.DBObject) into my custom objects.  I have emailed my team about this progress, as I know they need some of this functionality to continue with their own sections of the project.  I'm sure they will have more methods to request and some input on additional parameters or outputs for the ones I've written.  Feels good to get some working code out.
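
The overall pattern ended up looking something like the sketch below (the connection string, collection name, and fields are placeholders for illustration - the real classes have more going on): connect with the plain Java driver, query a collection, and hand-translate each DBObject into one of my domain objects instead of using an ORM.

    import java.net.UnknownHostException;

    import com.mongodb.BasicDBObject;
    import com.mongodb.DB;
    import com.mongodb.DBCollection;
    import com.mongodb.DBObject;
    import com.mongodb.MongoClient;
    import com.mongodb.MongoClientURI;

    public class SymptomQueries {

        private final DBCollection symptoms;

        // Older 2.x drivers declare UnknownHostException on the MongoClient constructor.
        public SymptomQueries(String uri, String dbName) throws UnknownHostException {
            MongoClient client = new MongoClient(new MongoClientURI(uri));
            DB db = client.getDB(dbName);
            this.symptoms = db.getCollection("symptoms");
        }

        // Manual DBObject -> Symptom translation, in place of a full ORM layer.
        private static Symptom fromDBObject(DBObject doc) {
            Symptom s = new Symptom();
            s.setId(doc.get("_id").toString());
            s.setName((String) doc.get("name"));
            return s;
        }

        public Symptom findByName(String name) {
            DBObject doc = symptoms.findOne(new BasicDBObject("name", name));
            return doc == null ? null : fromDBObject(doc);
        }
    }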

Monday, March 3, 2014

The Joel Test: Part 2

More on "The Joel Test: 12 Steps to Better Code".

#7: Do you have a spec?

    This is something I've been harping on for like a week now.  We have several components in our system (such as the database, web front end, etc.) and have been rather informally discussing how they will interact with each other.  It's been causing a lot of ambiguity and confusion about which parts do what and how they go about their business.  Related to this test, I want my team (myself included) to write what I'm calling 'interaction specs' this week - detailed descriptions of how each of our sections of the project will interact with their neighbors.

#8: Do programmers have quiet working conditions?

     This is another one that doesn't really apply to us, since we are not a full-time development team that works in a common location.  I think we all do have quiet working conditions (such as our homes or the library), so I think we can get a pass for this one.

#9: Do you use the best tools money can buy?

     We are probably failing this test, but that's because we don't feel we should be spending money on a school project (which will likely never draw revenue).  We are still using good tools such as IDEs (Eclipse, IntelliJ), database management services (MongoLab), and source control (BitBucket).  However, all of these tools are free.  I don't know that we would be better off with any paid tools than we already are.  Hard to say, since we aren't shopping for anything that costs money.

#10: Do you have testers?

     Ah, another test our team must fail because of the business environment it is in.  We are the whole team, so obviously there are no testers.  I think Joel makes a solid point here, that many groups pay programmers to do relatively unskilled work like software testing.  I myself have been tasked with doing such testing at my work; it can be helpful to know about software, but generally, you could recruit most computer-literate people to do it.

#11: Do new candidates write code during their interview?

    This test makes me cringe!  I haven't had to write code during an interview yet (I've only had one tech job interview, at Sandia).  I can't tell if it's a reasonable thing to do or not.  I think there are people around who write great, great code but do it at a different pace.  Some people take their time and write it well the first time, with minimal debugging.  That approach would make them look slow-to-code or perhaps unskilled in most "write me some code that does X" style interviews.  I'm not convinced about the validity of this test.

#12: Do you do hallway usability testing?

     While our team doesn't have offices (or any such shared space), I do want to implement some things like this.  The only issue is that our UI is just one component of the code.  I do feel, however, that this kind of test is related to things like code reviews and having to explain to each other what your code does and how it's structured.  It gives you another pair of eyes, and sometimes explaining it helps you see the process more clearly (this is sometimes called "rubber duck debugging": http://en.wikipedia.org/wiki/Rubber_duck_debugging).

The Joel Test : Part 1

There's a software engineering blog I have read on and off for a while, found here:

http://joelonsoftware.com/

A fellow at my work recently retired and left some books behind.  Among them was a book by this same guy (Joel Spolsky) called "Joel on Software".  I was particularly interested in an article that appears both in the book and at the following link, called "The Joel Test: 12 Steps to Better Code":

http://www.joelonsoftware.com/articles/fog0000000043.html

It outlines 12 litmus-type tests that, according to him, indicate the health and productivity of a software development team.  I'm planning on following most of this advice with my own team, and have already implemented some of it (though some parts remain).  I'll discuss the first 6 tests here, and the latter 6 in a later post.

#1: Do you use source control? 

    This kind of seems like a no-brainer for a team project to me.  I've never done a team project without some kind of source control tool.  Sounds like a friggin' nightmare.  For this project, my team is using Git in conjunction with BitBucket.

#2: Can you make a build in one step?

     I feel as though this test will be a challenge for us to pass.  We are using several different technologies, including a remote MongoDB instance, Java code in conjunction with the Play web framework, and web stuff like HTML, CSS, and Javascript for the front end.  I intend to insist that we can pass this particular 'Joel test'.  If we need a list of 15 things to do to re-bundle and build the software project, that's gonna stress everyone out and bog us down near the finish line.

#3: Do you make daily builds?

     I actually don't think this one applies to us as much.  For one, this project is not a full-time job for us, so daily builds are not really necessary.  However, I do think that once we are settled into our basic code base and are making improvements and adding features to it, weekly or bi-weekly builds would be nice.  Note that this test is really only feasible if you can pass test #2.

#4: Do you have a bug database?

   This is a tool that we had not discussed before I brought this article up to the team.  I have seen bug-tracking programs in action, and have become a believer in them.  I was originally going to use a separate one (e.g. 'Buggle'), but it was pointed out to me that BitBucket has a built-in issue tracker.  We'll see if that does the job when we get deeper into development.

#5: Do you fix bugs before writing new code?

     I will demand this from the team as much as possible.  If you're trying to add features on top of buggy code, you might as well be building a skyscraper on quicksand.  You'll also burn time debugging things that are the fault of already-buggy code, rather than the new code.

#6: Do you have an up-to-date schedule?

     We are currently failing this test, in my opinion.  Our schedule and milestones are evolving so rapidly that the written version is simply inaccurate.  I will be working to fix this soon.

Tests 7-12 to come in a later post.