Adding Features and Testing Using Intel IoT Technology

Published: 26-Apr-2016 | Last Updated: 26-Apr-2016
 

In our previous blog entries, we introduced concepts in developing IoT architecture. Most notably, we laid out a development process that specifies a pathway to production:

1. Define the Problem
2. Identify/Design Solutions
3. Build a Proof of Concept
4. Scale Up to a Prototype
5. Add Features/Evaluate
6. Scale to Production

In step 4 – Prototyping, we envisioned several different levels of prototype, all working toward the eventual marketable product. However, we also stated that the focus should be on the core product, so that you do not introduce so many points of contention that you severely dilute it.

Too often, added features take so much away from the base product that it becomes hard to deliver the main solution efficiently.

So you may ask, why not integrate adding features and testing within the prototyping phases? In reality they are there, but we felt it was better for those new to development to see these steps broken out, making it easier to focus on the goals at hand. When you have too many items to deal with in a phase, all of them suffer, so we break out the add-features work so that it can live and breathe on its own. Besides, if you focus on a base prototype and turn to adding features later, you can do a much better job of accomplishing these goals. Additionally, by deferring this step, you have already established that your software and hardware function amply, so adding a feature becomes much easier.

Baseline

The first thing you want to accomplish, and enter step 5 with, is a fully functioning prototype. At this point, testing should enter your vocabulary in a big way. Previously, you may have performed basic testing and gathered some evaluation and performance figures, but now you must take that testing to a higher level. By establishing a baseline at this point, any further additions, modifications, and tweaks can be evaluated in a number of ways.

Additionally, as part of your baseline, you should consider keeping a saved go-to build. This build includes the base sensors, design, and software. If you encounter problems, you roll back to the baseline build and start over in a fresh state, so you can isolate whatever caused the issues. The baseline build and code can also be used for troubleshooting, by comparing your troubled code against known-good, functioning code.
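Rolling back to a known-good state is easier when the baseline itself is captured in machine-readable form. Here is a minimal Python sketch of that idea; the file name, tolerance, and choice of statistics are our assumptions, not from the original process:

```python
import json
import statistics

def save_baseline(readings, path="baseline.json"):
    """Record summary statistics for a set of known-good sensor readings."""
    baseline = {
        "mean": statistics.mean(readings),
        "stdev": statistics.stdev(readings),
        "count": len(readings),
    }
    with open(path, "w") as f:
        json.dump(baseline, f)
    return baseline

def matches_baseline(readings, path="baseline.json", tolerance=0.05):
    """Return True if the new mean is within `tolerance` (5%) of the baseline mean."""
    with open(path) as f:
        baseline = json.load(f)
    drift = abs(statistics.mean(readings) - baseline["mean"]) / baseline["mean"]
    return drift <= tolerance

# Capture a baseline from the known-good build's water-temperature readings (deg C).
save_baseline([21.0, 21.2, 20.9, 21.1])
```

When a feature build later produces readings, `matches_baseline(new_readings)` gives a quick pass/fail signal before you start deeper troubleshooting against the go-to build.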

How to Add Features

There are a few ways to look at the process of adding features. The important point is to have a plan and a baseline build, as mentioned before. By omitting either or both, you make your development process much more difficult. It is a rare occasion that unplanned changes work appropriately to their fullest extent.

The list, or punch-list, method. In working with your previous testers, you may have questioned them after use about what would improve your product. This list of improvements can become your feature-add list, or punch list. These changes, and the flat-out things you didn't think of, are invaluable to improving your product. You can work the list top-down, organize it from easiest to hardest to add, or evaluate each item for its value add and reorder the list in that order.
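The value-based reordering can be sketched in a few lines of Python. The features and their 1–10 value and effort scores below are hypothetical, purely to show the ranking:

```python
# Hypothetical punch list: each entry pairs a tester-suggested feature
# with a rough value-add score and an effort estimate (1-10 scales assumed).
punch_list = [
    {"feature": "flow sensor", "value": 8, "effort": 3},
    {"feature": "mobile alerts", "value": 9, "effort": 7},
    {"feature": "LCD readout", "value": 4, "effort": 2},
]

# Reorder the list by value per unit of effort, highest payoff first.
ranked = sorted(punch_list, key=lambda f: f["value"] / f["effort"], reverse=True)
ranked_names = [item["feature"] for item in ranked]
```

Whether you rank by raw value, by ease, or by this ratio is a project decision; the point is that the ordering is explicit and repeatable rather than ad hoc.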

The next method is to look at the core problem you are solving. If your base prototype solves that problem, what are the adjoining problems, or even data types, that could be captured concurrently? For instance, if you are monitoring water temperature, you could also measure flow, salinity, pressure, or various other qualities as sensors are available. Effectively, you look to add based on functions in proximity.
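One way to add adjoining measurements without disturbing the baseline is to make the new channels optional. A sketch of that pattern, where the class, field names, and units are our assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WaterReading:
    """The base prototype measures temperature; adjoining channels default
    to None so the baseline build keeps working as sensors are added."""
    temperature_c: float
    flow_lpm: Optional[float] = None      # added feature: flow (liters/min)
    salinity_ppt: Optional[float] = None  # added feature: salinity
    pressure_kpa: Optional[float] = None  # added feature: pressure

# The baseline build still produces valid readings with only temperature...
baseline_reading = WaterReading(temperature_c=21.0)
# ...while a feature build fills in the channels in proximity.
extended_reading = WaterReading(temperature_c=21.0, flow_lpm=12.5, salinity_ppt=0.4)
```

Because the baseline code never touches the optional fields, rolling back after a failed feature add does not require reworking the data model.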

The project plan. Yes, you remember the project plan from a few steps back? It could potentially be your means to add features. You may have built along a critical path, or even a shortened path, and those pathways may have prevented certain features from coming to light. At this step, you have reached a logical point in the project plan where including features outside the chosen path can be allowed with few issues, and it could be argued that this is not project creep.

Evaluation

There are hundreds of approaches to evaluation; to explain them all would be impossible, as evaluation is a college course in itself. We will look at some of the basic groundwork necessary for evaluation. This will allow you to measure all of your steps or, more importantly, quantify the functionality of your product.

In one approach, there are three basics to evaluation: reliability, validity, and sensitivity. Reliability seems straightforward, but it can be far-reaching. It can be as simple as uptime, or a measure of whether a reading is accurate at a given point in time. Reliability can be viewed as achieving the same results repeatedly over time.

Validity goes hand in hand with reliability. Where reliability demands the same results repeatedly over time, validity seeks to confirm that those results measure what was intended. When the range of results is too broad or too narrow, validity is more difficult to establish.

Lastly, sensitivity is quite important. When your results have poor ranges, either too broad or too narrow, sensitivity is called into question. If your sensor is incapable of covering the full range of possible temperatures, sensitivity is forsaken. Likewise, if the sensor cannot adjust rapidly, you will have a sensitivity issue.

Taking the three, reliability, validity, and sensitivity, into account, building an evaluation scheme for your product and feature testing should be clearer. Your metrics should be fairly specific and should quantify your efforts to prove functionality, stability, and whether you have solved a given problem.
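The three criteria above can be sketched as simple numeric checks. In this hypothetical Python sketch, reliability is the spread across repeated readings, validity is the bias against a trusted reference, and sensitivity is whether readings cover the range the application needs; the function, names, and thresholds are all our assumptions:

```python
import statistics

def evaluate_sensor(repeats, reference, needed_range):
    """Score one test run against reliability, validity, and sensitivity.

    repeats      -- repeated readings taken under the same known condition
    reference    -- the true value, from a trusted instrument
    needed_range -- (low, high) span the application must cover
    """
    mean = statistics.mean(repeats)
    low, high = needed_range
    return {
        # Reliability: same results repeatedly -> low spread across repeats.
        "reliability_stdev": statistics.stdev(repeats),
        # Validity: results reflect what was intended -> small bias vs reference.
        "validity_bias": mean - reference,
        # Sensitivity: readings must sit inside the full range the problem needs.
        "covers_range": all(low <= r <= high for r in repeats),
    }

report = evaluate_sensor([20.9, 21.1, 21.0], reference=21.0, needed_range=(0.0, 40.0))
```

Running this against the baseline build first, then against each feature build, turns "did the new feature hurt anything?" into a comparison of a few specific numbers.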

Overall, a measured approach to adding features will save you from inevitable setbacks in your development program. A solid plan, rooted in decent evaluation methods, will push you toward completion with a firm basis of quantifiable functionality. Look to our final blog article, Step 6: Scaling to Production, to help bring your project together for a great finish.

  • Learn more about adding features such as sensor networks.
  • The VTune Amplifier can be used in many ways for evaluation of performance data.
  • Debugging using Intel System Studio.

For more Intel IoT resources and tools, please visit the Intel® Developer Zone.

Source: https://software.intel.com/en-us/blogs/2016/03/04/adding-features-and-testing-on-intel-iot-architecture