Wednesday, March 29, 2006

The Great Robot Race - The DARPA Grand Challenge

Watched the NOVA program on the Grand Challenge on PBS. I found it immensely inspiring. Some links:
- The program's website: http://www.pbs.org/wgbh/nova/darpa/
- Sebastian Thrun's website: http://robots.stanford.edu/index.html
- Red Whittaker's website: www.ri.cmu.edu
- The DARPA Grand Challenge website: http://www.darpa.mil/grandchallenge05/index.html

Tuesday, March 28, 2006

Estimate, guesstimate, pest-i-mate

Here is a very good post on estimating how much time the test work for a software project will take. I got it from: http://blogs.msdn.com/anutthara/archive/2006/03/28/562989.aspx

You are sitting in your office peacefully looking at the newly written specs for v1.1 of your product. You poke around at some code for that tool you always wanted to write - but the ship pressures didn't let you. As you glance at your inbox, there is an innocuous looking mail from your test manager asking you for an estimate of the test work for the components you own. What? How the heck are you supposed to estimate something that you know so little about? And just what is the basis of the estimate? This is so vague!

Hmm...we could start with this:

- Break it down again… – Break down your component into smaller subcomponents that are separately testable. Further break these down into tasks that can be estimated in a fair way. For instance, consider an activity like creating sample repositories for migration in a PSConverter. You can split this into creating small, medium and big repositories. The big repositories can be further split into repositories for 10K, 50K and 100K actions. Split the medium and small repositories that way too – at the end you will have a set of tasks, each granular enough to estimate.
- Déjà vu – Aw, come on, you've done this before. Activities that are similar to ones you have already done can be estimated based on past experience. In the same instance as above, if you have created a 5K repository before, you can intelligently guess the time you would require to create a 10K or a 50K repository. Use that experience wisely.
- Why God, why? – Against each estimate of a task, clearly document the justification for the time. If you have done the subtasking part correctly, this part is a natural follow-up. This documentation is as essential to the reviewer as it is to you. There is no way you are going to remember in M3 why you thought a particular activity would take 2 days when you have already spent the whole week on it and it is still unfinished. Call out caveats and special conditions clearly in your test estimate so that the reviewer is assured that you have factored them into your estimate.

But as Chris Shaffer, our TF test manager, would put it, a world-class tester won't just stop at that. You have a few more tricks to deal with this fuzzy monster:

- Bleeding edge technology please – We are not in the Stone Age anymore; stop using crude methods to track your estimate. Get savvy: use Project or other tools to track your test tasks and work estimates. Not only do they supply useful features like sequencing tasks, marking the start and end dates of an activity, graphically depicting the size of tasks, associating tasks with custom tags, etc., but they are also much more maintainable and easily understandable. If tomorrow you move to a cooler project than your current one, the person replacing you will have to understand your estimate, right?
- Back to square one – Iterate over your estimates regularly. Many new things may come to light as time goes by that force you to change estimates. A certain feature may expand in scope, making you break it up into smaller tasks, while another feature may be cut altogether, leaving a gaping hole in your estimate. Don't obsess over task sizes and start estimating to the level of hours, minutes and seconds. In my opinion, a day is the most granular unit of time you can use to measure your estimates in M0.
- Where's my PM stick again? – Now is a good chance to build more clarity into your spec, just in case you missed out during the spec reviews. As you estimate your test work, you will have to think about the feature in a much more detailed manner, which will automatically reveal any gaps in the spec. Don't sit on those – talk to your PM straightaway and bridge those gaps. A spec is a tester's insurance – hold it close to your heart.
- Buffer overruns are good for you – Allocate a good amount of buffer for unforeseen circumstances. Don't plan too close to the edges – you will certainly have missed something. Based on the ambiguity of the task, you may have to allocate anywhere between 30% and 70% of buffer to the task. Don't fret – your manager won't think you are a moron if you ask for 3 days to come up with a test plan for a simple feature – just make sure that you write a complete and well-thought-out test plan. Another thing to remember is to consult your lead – she'll probably give a bigger estimate than you would have, but will consider many other factors that you may have overlooked.
- The ultimate truth – All said and done, this is one of the fuzziest activities in the cycle and is guaranteed to come with a high margin of error. Stop looking for perfection in your estimates. Do your best to factor in all possible eventualities and come up with a sane and realistic estimate – but don't get depressed if you overshoot it later on – you'll get better with time.

Saturday, March 25, 2006

The Guerrilla Guide to Interviewing

An article by Joel Spolsky about interviewing candidates for a software job. A very good read. http://www.joelonsoftware.com/printerFriendly/articles/fog0000000073.html

Friday, March 24, 2006

Passing arrays from C# to Managed C++ : 2

After some more research, here is the best way to define the Managed C++ functions so that they can be called easily from C#.

/* pass by reference
   Call from C# like this:
   double dataValue = 0;
   mClassObj.ValueByRef(ref dataValue);
*/
void ValueByRef([In][Out] System::Double *data) // instead of &data
{
    *data = 100;
}

/* returning an array
   Call from C# like this:
   double []dataArrayReturned = mClassObj.ReturnArray();
*/
System::Double ReturnArray()[]
{
    System::Double data[] = new System::Double[3];
    data[0] = 3;
    data[1] = 33;
    data[2] = 333;
    return data;
}

/* pass as an out param
   Call from C# like this:
   double []dataArrayAsOut;
   mClassObj.CreateArrayOut(out dataArrayAsOut);
*/
void CreateArrayOut([Out] System::Double (*data)[])
{
    *data = new System::Double[3];
    System::Double tmp[] = *data;
    tmp[0] = 1;
    tmp[1] = 11;
    tmp[2] = 111;
}

/* pass as a ref param
   Call from C# like this:
   double []dataArrayByRef = null;
   mClassObj.CreateArrayRef(ref dataArrayByRef);
*/
void CreateArrayRef([In][Out] System::Double (*data)[]) // instead of System::Double (&data)[]
{
    *data = new System::Double[3];
    System::Double tmp[] = *data;
    tmp[0] = 2;
    tmp[1] = 22;
    tmp[2] = 222;
}

Passing arrays from C# to Managed CPP (C++)

In a previous post I wrote about how to set up a C++ function so that it could be called from C# with a value passed in by reference. (Hint: you need to declare the function like this: void ValueByRef (System::Double __gc & data).)

So another question was: how do I send an array from C# to Managed C++? Simple. The definition in MC++ is:

void ProcessArray(System::Double data[])
{
    data[0] = 0;
    data[1] = 1;
}

(Update: in the same way one can also send in a two-dimensional array: void ProcessArray(System::Double data[,]).)

And to call it from C#:

double []dataArray = new double[3];
mClassObj.ProcessArray(dataArray);

My next question was: how do I return an array that was created in MC++? In C++:

System::Double ReturnArray()[] // observe the [] at the end!
{
    System::Double data[] = new System::Double[3];
    data[0] = 3;
    data[1] = 33;
    data[2] = 333;
    return data;
}

In C#, call the function like so:

double []dataArrayReturned = mClassObj.ReturnArray();

Now we know how to pass an array to an MC++ function and also how to return an array from MC++ to C#. So the next question is: how do I send an array to MC++, have the array created in MC++, and get it back in C#? If you set up the function in MC++ like so: void CreateArray(System::Double data[]), and then assign a new array inside the function, the created array is local in scope and will be lost once the function has finished executing. So what we need is a way to pass the array by reference. To do this you need to set up the function like this in MC++:

void CreateArrayRef (System::Double (&data)[])
{
    data = new System::Double[3];
    data[0] = 2;
    data[1] = 22;
    data[2] = 222;
}

and call it from C# like this:

double []dataArrayByRef = null;
mClassObj.CreateArrayRef(ref dataArrayByRef);

Ideally, though, I would like to call the MC++ function with the array as an out parameter; otherwise C# will complain that I have not initialized the array (which is why I had to set dataArrayByRef to null above). Thus, to call an MC++ function like the following from C#:

double []dataArrayAsOut;
mClassObj.CreateArrayOut(out dataArrayAsOut);

you need to set up your MC++ function like this:

void CreateArrayOut([Out] System::Double (*data)[])
{
    *data = new System::Double[3];
    System::Double tmp[] = *data;
    tmp[0] = 1;
    tmp[1] = 11;
    tmp[2] = 111;
}

Remember to include using namespace System::Runtime::InteropServices; at the top, which is where the Out attribute is defined. Hope this answers all the questions about passing arrays back and forth between C# and Managed C++.
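The out-parameter trick above has a direct analogue in plain, non-managed C++, which may make the mechanics clearer: the callee receives the address of the caller's pointer, allocates the array, and writes the new pointer through it. This is only a sketch of the underlying idea in standard C++ (the function and parameter names here are made up for illustration, not the Managed C++ API from the post):

```cpp
#include <cstddef>

// Plain C++ sketch of the "out array" pattern: the caller passes the address
// of its pointer; the callee allocates the array, fills it, and reports its size.
void CreateArrayOut(double** data, std::size_t* size)
{
    *size = 3;
    *data = new double[3];
    (*data)[0] = 1;
    (*data)[1] = 11;
    (*data)[2] = 111;
}
```

Usage mirrors the C# call: the caller declares an uninitialized pointer, passes its address, and afterwards owns the allocated memory (and must delete[] it) – the ownership transfer that the CLR's garbage collector handles for you in the managed version.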

Thursday, March 23, 2006

Hashing Explained...

From: http://blogs.msdn.com/irenak/archive/2006/03/23/558838.aspx

So, you hear it all the time – hashtable, hash value, etc. You've seen the GetHashCode method on the Object type. But what is hashing?

Hashing is the process of applying one of many hash algorithms. It's frequently used to create and use hash tables, which significantly increase search efficiency, making it constant time, O(1) (compare that to binary tree search, which is O(log n)). A hash table is simply an array of data (the actual table where the data to be searched is stored) plus a mapping function, known as a hash function.

The Wikipedia page http://en.wikipedia.org/wiki/Hash_function gives the following definition: a hash function or hash algorithm is a function for examining the input data and producing an output of a fixed length, called a hash value. Two different inputs are unlikely to hash to the same hash value.

There are many hash functions. You might have heard of secure hash algorithms such as SHA, or others like MD2, MD4, MD5, etc. But to understand the idea, let's look at the two simple examples below (as Damien Morton correctly stated in the comments, these are not good examples for "real-life" usage, but they convey the point in an easy-to-understand way):

public int DivisionHashCode(string data)
{
    // Division method hashing
    int result = 0;
    foreach (char c in data.ToCharArray())
    {
        result += c;
    }
    result %= 2053; // let's assume our max string is 2048 chars... find the next prime number
    return result;
}

public int MiddleSquareHashCode(string data)
{
    // Middle square algorithm, where M = 128, k = 7, w = 1
    int result = 0;
    foreach (char c in data.ToCharArray())
    {
        result += c;
        result = (result * result) >> 6; // Thanks to Damien Morton for the correction of a bug in the original post
    }
    return result;
}

The returned hash value is, in essence, an index into the array of data. So, instead of searching for a match by comparing each key (linear search), or by leveraging the powers of binary tree search, we simply convert the key (string) to a hash value (integer), which gives us an index into the data table. Then it's a simple data[index] operation to get the value. While simplified, this, in essence, is how hashing works.
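The lookup idea above can be sketched in plain C++ (a hypothetical toy table written for this post – the names ToyHashTable, put and get are made up, and collisions are handled with simple linear probing, which the original examples don't cover):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Division-method hash, as in the post: sum the characters, mod a prime.
const int TABLE_SIZE = 2053; // prime, per the post's comment

int DivisionHashCode(const std::string& data)
{
    int result = 0;
    for (char c : data) result += c;
    return result % TABLE_SIZE;
}

struct Entry
{
    std::string key;
    int value = 0;
    bool used = false;
};

// Toy fixed-size hash table: the hash value is the index into the array,
// so lookup is (expected) constant time instead of a linear or tree search.
struct ToyHashTable
{
    std::vector<Entry> slots;
    ToyHashTable() : slots(TABLE_SIZE) {}

    void put(const std::string& key, int value)
    {
        int i = DivisionHashCode(key);
        // Linear probing on collision; assumes the table never fills up.
        while (slots[i].used && slots[i].key != key) i = (i + 1) % TABLE_SIZE;
        slots[i] = Entry{key, value, true};
    }

    int get(const std::string& key)
    {
        int i = DivisionHashCode(key);
        while (slots[i].used)
        {
            if (slots[i].key == key) return slots[i].value; // a single data[index] probe chain
            i = (i + 1) % TABLE_SIZE;
        }
        return -1; // not found
    }
};
```

The point of the sketch is the last step: get converts the key to an index once and then reads the slot directly, which is what makes hash lookup O(1) on average.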

Thursday, March 09, 2006

3D World Simulation - Martin Baker

3D World Simulation, by Martin Baker. This site aims to give you all the information you need about 3D in a structured way.

Tuesday, March 07, 2006

Ordered Polygon Demonstration

FastGEO, by Arash Partow: "Ordered Polygon Demonstration" – a demonstration of transforming a concave, self-intersecting polygon into a simple, non-self-intersecting polygon.

Monday, March 06, 2006

Passing values by reference from a C# DLL to a C++ DLL

I had a Managed C++ function that took, for one of its arguments, a double value passed in by reference:

void MCppFunctions::ChangeValue (double &dValue) {...}

So how does one call a function like this from C#? If you try the obvious method, MCppFunctions.ChangeValue(ref dValue), you will get a compile error. To fix this problem, all you need to do is change the C++ function definition to the following:

void MCppFunctions::ChangeValue (double __gc & dValue)

Once that's done, you can call the function from C#, passing in the value and qualifying the parameter with the ref keyword.
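For contrast, in plain native C++ (no CLR involved) pass-by-reference needs no special qualifier at all – the __gc is only required because the reference crosses into managed code. A minimal native sketch (the function name and the value 100 are just for illustration):

```cpp
// Native C++ reference parameter: the callee writes straight through
// to the caller's variable, no ref keyword or __gc qualifier needed.
void ChangeValue(double& dValue)
{
    dValue = 100;
}
```

Calling it is simply: double v = 0; ChangeValue(v); and v now holds 100.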