I have been a Lego League coach since 2007. This year, I wanted to document the season to give rookie coaches a resource to help them through to competition. The process can be intense, but it can also be a lot of fun for you and your team.

I hope to cover enough through my posts, but if I leave anything out, please feel free to leave a comment, or contact me.
Oct 25th

View Caveat: The Light Sensor

Author: fllCoach | Filed under Programming

In my last post, I talked about the View feature of the brick that allows you to see what the sensors attached to the brick are reading. There is one important caveat regarding the light sensor that you need to be aware of.

When you use the View to see what the light sensor is reading, it will likely range between 30-35 for black and 60-65 for white. These are non-calibrated values. If you calibrate the light sensor (which you should), these values will not change. That is, the View only shows non-calibrated values. In order to see the calibrated values, you have to write a small program to display the values the light sensor is reading. Or you can download one.
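The calibration described above amounts to a linear rescaling: the raw black and white readings are mapped onto 0 and 100, and everything else falls in between. Here is a minimal Python sketch of that idea. The exact numbers (33 for black, 63 for white) are illustrative, taken from the typical ranges mentioned above, not from the NXT firmware itself; the clamping to 0..100 is also an assumption about how out-of-range readings would be handled.

```python
def calibrated(raw, black=33, white=63):
    """Map a raw light reading onto the 0-100 calibrated scale.

    black and white are the raw values the sensor reports over the
    mat's black line and white surface (illustrative numbers here;
    measure your own with View before calibrating).
    """
    value = (raw - black) * 100.0 / (white - black)
    return max(0.0, min(100.0, value))  # clamp to the calibrated range
```

For example, with these defaults a raw reading of 48 (halfway between black and white) comes out as 50 on the calibrated scale, which is why calibrated thresholds are so much easier to reason about than raw ones.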

Nxtprograms.com has a good calibration program that you can download that sets the black and white values to 0 and 100, respectively. The zip file also includes a viewer program that shows what the light sensor is reading. Use that viewer to check the light sensor values after you run the calibration program.

I just learned of this last week. It would have helped me greatly when we were trying to diagnose problems with our light sensor last year. We ended up working around our problems. Hopefully, by understanding this caveat, you can avoid the confusion I had.


16 responses. Wanna say something?

  1. Dean Hystad
    Oct 26, 2010 at 13:11:46
    #1

My team only uses View to find out whether a sensor or motor is plugged into the correct port (it is very hard to trace wires in their robots). Other than that, they find View to have no value.

    As you mention, View is a source of great confusion if you calibrate your light sensor. I think teams do themselves a disservice using View to program duration for a Move block. The team may never learn the relationship between motor duration and move distance, or motor duration and turn angle. If your programs use motor duration you are also locked in to never changing your robot design. A change in wheel diameter or robot track (distance between wheels) invalidates all your programs.

    I think a much better solution is to learn how to convert linear distance (measured in millimeters or inches) into motor duration. Write your own Move MyBlock that does the conversion, and write your programs using a tape measure. The biggest benefit, other than knowing a lot more about robotics, is that now your programs are protected against change. Decide to use bigger wheels to go faster and save time? Just change the conversion factor inside the MyBlock, and all your programs magically work again.
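Dean's distance-to-duration conversion boils down to one formula: one wheel revolution (360 degrees of motor rotation) moves the robot one wheel circumference. A small Python sketch of that conversion, with the standard 56 mm NXT wheel as an illustrative default (measure your own wheels; the function name is mine, not from any NXT library):

```python
import math

def mm_to_degrees(distance_mm, wheel_diameter_mm=56.0):
    """Convert a straight-line distance into motor rotation degrees.

    One wheel revolution (360 degrees) covers one circumference
    (pi * diameter). 56 mm is a common NXT wheel diameter; swap in
    your own measurement if you change wheels.
    """
    circumference = math.pi * wheel_diameter_mm
    return distance_mm / circumference * 360.0
```

This is exactly why Dean's MyBlock approach survives design changes: if you switch to bigger wheels, only `wheel_diameter_mm` changes, and every program built on the conversion keeps working.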

  2. fllCoach
    Oct 26, 2010 at 14:30:47
    #2

    This is a very interesting idea. How do you give inputs to MyBlocks? My team just created a “Finds Black Line” MyBlock at 25% power for a specific mission. But we need the same function at 50% for another mission. I wanted to send a “parameter” to the MyBlock to indicate power, but I couldn’t figure out how to do it.

    What you describe here is a very good idea, but it is also very advanced. I believe most of the coaches that come here are rookie coaches, so sticking with degrees for their first year is a good way to get started. They can move on to your more advanced methods (which I would like to learn more about) once they have a solid foundation.

  3. fllCoach
    Oct 26, 2010 at 14:43:45
    #3

I found a simple tutorial on how to create MyBlocks with inputs, as well as a very good, more detailed tutorial. This is very cool and I can’t wait to try it out tonight.

    Thanx for the info, Dean.

  4. Florian
    Oct 28, 2010 at 22:15:52
    #4

    Hi. I am a rookie coach. First I wanted to thank you for taking the time to share your experience in this blog. It’s been a great help. I also enjoyed Dean’s comments.

But to your previous point, I agree that some of the techniques mentioned seem very advanced. What I always ask myself is: how old are your team members, Dean?

I ask because, having a team of fifth graders, I always wonder what a reasonable level of expectation is…
    Coach, in the three years you’ve coached have you observed big differences related to age?

  5. fllCoach
    Oct 28, 2010 at 22:28:34
    #5

    Florian

    Thank you for your comment and your kind words. I’ve only coached 4th and 5th graders so I don’t have experience beyond that age. I’ll be “graduating” to middle school students next year when my daughter moves on to middle school. However, I have heard that the ideas that come from 6th and 7th grade teams have a leap in complexity and creativity.

I’ve found that you need to keep it pretty simple with 4th and 5th graders. We are using two light sensors to follow a line this year, and I would say that she would not have been able to program it (even with my help) without the experience of following a line with one light sensor last year. I want to teach her a few more advanced concepts, but while I believe she would understand everything well enough to explain it, I don’t know if she could reproduce it from scratch by herself if asked to by a skeptical judge. Then again, she’s surprised me many times this season.

    My advice is to stick with simple motor moves, sensor detections, loops, and myBlocks. My team last year did very well sticking to those foundations.

  6. Dean Hystad
    Oct 29, 2010 at 14:30:33
    #6

    My girls were all 8 when they started. They’ll all be turning 14 in the next few months. They retired from FLL after last season and are mentoring a young all girl team this year.

    Our team was different. We were 40% play date, 40% science and technology club, and 20% FLL team. We spent time doing things like writing video games or playing robot sumo. We built working generators out of LEGO. Remember the wave generator you had to build for Power Puzzle? Ours actually generated electricity. We did lots of experiments. Most of what the girls built or wrote was never used in competition, but the concepts they learned were applicable.

    Our approach worked for us. The girls had fun while learning, and gained a lot of valuable life experience in the process. They are so comfortable with public speaking that they put on an FLL Orientation day at Boston Scientific all by themselves (I was called out of town for work). The local FLL facilitator said the presentation was so well organized and informative that it is now the template for all future presentations. And now my girls are passing on the lessons they learned. I am very happy.

  7. Gremlin
    Nov 15, 2010 at 23:47:06
    #7

An observation on NXT light calibration. I am pretty certain this was true under NXT-G 2.0 and firmware 1.28; I think it holds under NXT-G 2.1 and firmware 1.31. It seems there is only one set of light calibration parameters used by the standard NXT calibration and sensor read blocks — so if you have multiple sensors, they may behave differently from one another. In particular, if the sensors sit at different heights above the table, the calibration can’t be right for both at the same time. Often it’s “close enough”. Hanging an unshielded light sensor sideways to figure out the “destroy bad cells” mission in Body Forward highlights the differences in sensor calibrations. There are ways around this by reading the “raw” sensor values and massaging them — but that is quite a bit more advanced. The team I coach is not using a color sensor; that may have its own calibration.

About age-appropriate programming: I think it depends on a blend of things — innate logic and math abilities, NXT programming experience (esp. including time spent deconstructing examples found online), other programming experience (like summer programs where kids code up online games) and an ability to focus. I agree that starting with bread-and-butter foundations is imperative. There is no sense in starting with a really advanced line follower unless they have coded up a “drive until dark”, then a simple two-state follower, then a three-state follower, etc. But some kids just “get it” and it’s all a green coach can do to keep up! This is my third season coaching and my eldest kid’s third season — he tosses things together without a thought this year that he could not have done last year and could not even have dreamt of two years ago. I don’t know if it’s age, experience or both.
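Gremlin's progression ("drive until dark", then a two-state follower, then three-state) is worth spelling out. The two-state version has one decision: if the sensor sees dark, the robot is on the line and swings one way; if it sees light, it is off the line and swings the other, so it wobbles along the line's edge. A Python sketch of just that decision (the threshold and power numbers are illustrative, and the function name is mine — in NXT-G this would be a Switch block inside a Loop):

```python
def two_state_follower(reading, threshold=50, base_power=40):
    """Return (left_power, right_power) for a two-state line follower.

    reading is the light sensor value; below the threshold means dark
    (on the line), so veer right; above means light (off the line),
    so veer left. Tune threshold and base_power on your own table.
    """
    if reading < threshold:
        return base_power, 0   # dark: pivot right, back toward the edge
    else:
        return 0, base_power   # light: pivot left, back onto the line
```

The three-state follower Gremlin mentions just adds a middle band around the threshold where both motors run, which is what makes the path smoother.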

I’m coaching a team with ages 9, 10, 11, 11, 12 this year and there is a difference between where the kids are academically and socially. One thing the scoring rubrics point out is that not everybody on the team has to be doing the same work. For example, the 9 and 10 year olds both have a strong sense of mechanical design — one did Jr FLL last year and learned a lot from that experience. Son the elder (age 11) said he wanted to step away from mission programming this year after doing a lot of it for the past two years. His opinion was that the best part of programming is creating useful utility MyBlocks (“judges like those, and besides, if you have good MyBlocks then mission programming is boring”). He cranked out a bunch of utilities, then he wanted to focus on research/presentation. So the two younger ones and the other 11 year old are doing most of the mission programming, but they have a strong arsenal of utilities. Could the “mission programmers” write all the utilities from scratch themselves? Probably not yet, possibly next year or the year after. (I don’t feel bad about this: could most professional software developers write an operating system from scratch? Not likely.) The 12 year old doesn’t own a Lego! She got in mostly for the research project. That being said, she is lead on one small mission for robot game and, whether she admits it or not, she has a knack for dissecting problems. It was fun a couple of weeks ago when she wandered away from working on the research presentation to the game table, looked at what the others were stewing over and said “Why don’t you just…” which is exactly what they ended up doing.

As to the play/robots ratio, my crew meets after school so they eat prodigious volumes of snacks rather than play — probably 20% snack, 50-70% actual work, 10-20% general horsing around, and 5-10% “let’s go out to the big rope swing in the back yard” or play “king of the leaf pile”. This is probably the one area where I do think age matters a lot — if the whole team is young (or immature) then it’s a lot more like herding cats. If the work time is productive then 50% is awesome; if it’s not productive, 150% is not enough. Remember folks, teamwork counts as much as everything else and deserves just as much coaching support as research or game programming.

  8. Manal rezk zedny
    Nov 30, 2010 at 08:14:42
    #8

Does the colour sensor work the same way as the light sensor, or does it differ much?!

  9. fllCoach
    Nov 30, 2010 at 08:32:57
    #9

    My understanding is that the color sensor works about the same as the light sensor, which is why they are allowing it this year.

  10. Dean Hystad
    Nov 30, 2010 at 16:39:54
    #10

    The color sensor doesn’t work like the light sensor at all. It has two modes; color and light. The color mode reports a color index (1 for black, 2 for blue….). When used in light sensor mode it has a Yes/No output that tells you the result of a comparison. Unlike the light sensor there is no intensity or raw output. There is also no way to calibrate the light sensor mode. The lack of calibration and an intensity output make the sensor almost useless as a light sensor.

  11. fllCoach
    Dec 3, 2010 at 11:03:56
    #11

What I meant is that I read that the color sensor has no advantage over the light sensor in competition. So the organizers allowed it so you could use it as a light sensor. I’ve never actually used the color sensor, so I don’t know exactly how it works in the NXT programming interface.

  12. Dean Hystad
    Dec 3, 2010 at 20:30:07
    #12

    I was answering the original question about how the color sensor differs from the light sensor. No reproach intended.

  13. John
    Mar 27, 2011 at 06:33:06
    #13

    is there any way to run color sensor as color mode and light mode in the same application ?

  14. fllCoach
    Mar 27, 2011 at 09:57:45
    #14

I’m sorry, John, but I’m not familiar enough with the color sensor to be able to answer your question. However, you could go to the FIRST forum and post the question there. There are many knowledgeable people on the forums who could answer your question. The forum can be found at http://forums.usfirst.org/forumdisplay.php?f=24.

    Thanx for stopping by.

  15. Thas
    May 2, 2011 at 21:27:25
    #15

Comment #7 (by Gremlin) is VERY important. My team of 10-year-olds is using three light sensors (yes, we bought extras!) with NXT-G 1.1. They spent an entire day going back and forth between calibrating each of the three sensors. 🙁
    I was convinced that the same memory location/register is being used to store ALL light sensor calibration values. This post (#7) confirms my hunch. Thank you!!!
BTW, thank you for the pointer to the line follower design. The calibration code is VERY useful for my kids. They don’t have to download the calibration routine every time. They created separate calibration routines for each sensor, and they keep these small code snippets as extra tools when going to a competition.
    Cheers.

  16. Thas Yuwaraj
    May 2, 2011 at 22:41:18
    #16

    As a follow up to my previous post. Here is the solution to dealing with single calibration for multiple light sensors (e.g. 3):
    – calibrate using the sensor that will need the MOST dynamic range (e.g. detecting white, black AND green)
    – use the ‘View sensor’ program described in the ‘Line Follower’ link provided in the original blog post. You will have to modify this program and create three different versions to ‘view’ each of the 3 light sensors separately.
    – use the values you read to determine your decision thresholds for the two sensors that were not calibrated.
    Hope this helps.
    Thas
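The last step of Thas's workaround — picking decision thresholds for the sensors that were not calibrated — is usually just a midpoint between what the View program shows for each sensor over the line and over the mat. A small Python sketch of that step (the function name is mine; the readings would come from viewing each sensor as described above):

```python
def decision_threshold(black_reading, white_reading):
    """Pick a dark/light decision threshold for one uncalibrated sensor.

    black_reading and white_reading are what the viewer program shows
    for that particular sensor over the black line and the white mat.
    The midpoint is a simple, serviceable choice; shift it toward
    black or white if one reading is noisier than the other.
    """
    return (black_reading + white_reading) / 2.0
```

Because each sensor sits at its own height and has its own response, each one gets its own threshold even though they share a single set of calibration parameters.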
