Project Ideas
During the build season, programmers often have down-time: the code for the robot is outlined, but a physical robot is still weeks away. This is a great time for students to experiment with new ideas, which could either enhance our abilities or make the programming/testing workflow more efficient. More importantly, students can learn a lot from digging deeper into a particular part of the code. Here are some ideas for projects to work on.
Nearly every year, we have some sort of lifter or arm that needs to move to a set of fixed positions, built from a motor and an encoder. For example, in Recycle Rush, we had both a tote lifter and a container lifter that worked this way. In Stronghold, we had a shooter that could tilt up and down, as well as a bar that could tilt up and down. The 2018 game will likely have something like this, either for manipulating power cubes or for climbing. It would be nice if we had a generic class we could extend for such subsystems.
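As a starting point, here is a minimal sketch of what such a class might look like. All of the names here (the class, `moveTo`, the constructor arguments) are hypothetical, not existing HYPERLib API:

```java
import edu.wpi.first.wpilibj.Encoder;
import edu.wpi.first.wpilibj.SpeedController;
import edu.wpi.first.wpilibj.command.Subsystem;

// Hypothetical sketch; none of these names exist in HYPERLib yet.
public abstract class FixedPositionSubsystem extends Subsystem {
    private final SpeedController motor;
    private final Encoder encoder;
    private final int[] setpoints; // encoder ticks for each fixed position
    private final double kP;       // simple proportional gain

    protected FixedPositionSubsystem(SpeedController motor, Encoder encoder,
                                     int[] setpoints, double kP) {
        this.motor = motor;
        this.encoder = encoder;
        this.setpoints = setpoints;
        this.kP = kP;
    }

    /** Drive toward a fixed position; call this periodically from a command. */
    public void moveTo(int index) {
        double error = setpoints[index] - encoder.get();
        motor.set(Math.max(-1.0, Math.min(1.0, kP * error)));
    }

    /** True once we're within tolerance of the given position. */
    public boolean atPosition(int index, int tolerance) {
        return Math.abs(setpoints[index] - encoder.get()) <= tolerance;
    }
}
```

Step 1 below should settle the questions this sketch glosses over, like whether a potentiometer can stand in for the encoder, or whether simple P control is enough.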
Work on this project could proceed as follows:
- Look through previous years' code to see what features are common to multiple years' robots. Decide what use-cases your class should cover (e.g. should you be able to use a potentiometer instead of an encoder?) and what features it should provide.
- Based on what you decided in step 1, write down a list of public/protected methods that your class will provide.
- Create a new branch in hyperLib and write an implementation for your class.
- Test your class with previous years' robots. The 2016 bot is already set up to use HYPERLib, so you can modify that code directly. For the 2015 bot, you should just clone the quickstart project and work from there.
- If any problems arose in step 4, consider redesigning your class to make it easier to use.
- If everything works well, show an adult, and we can merge your code into the master branch of hyperLib.
So far, we've only used vision in 2017. The code from that year was intended to be at least somewhat modular, since there were two tasks that used vision. The "generic" part of the code could take an image and find green rectangles. The user could then override a method that took a list of rectangles and returned a custom result type.
This design is probably good in general. One thing that needs to change is that many of the values that were constants need to be controlled either by getters/setters or by preferences. Also, some pipelines, like finding the largest target or the one closest to the center, can probably be re-used every year. As we use vision for a second year, we should get a better idea of how the requirements can vary.
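To make the split concrete, here is one possible shape for the generic part, sketched with hypothetical names (the actual 2017 classes may differ):

```java
import java.util.List;
import org.opencv.core.Mat;
import org.opencv.core.Rect;

// Hypothetical sketch of the generic/custom split described above.
public abstract class VisionPipeline<R> {
    // A threshold that used to be a constant; now adjustable at runtime.
    private double minArea = 50.0;

    public void setMinArea(double minArea) { this.minArea = minArea; }

    /** Generic part: find green rectangles, then hand them to the user code. */
    public R process(Mat frame) {
        List<Rect> targets = findGreenRectangles(frame);
        return interpret(targets);
    }

    private List<Rect> findGreenRectangles(Mat frame) {
        // HSV threshold + contour detection, filtered by minArea; omitted here.
        throw new UnsupportedOperationException("sketch only");
    }

    /** Year-specific part: turn the rectangles into a custom result type. */
    protected abstract R interpret(List<Rect> targets);
}
```

Reusable interpretations like "largest target" or "closest to center" could then ship as standard `interpret` implementations in HYPERLib.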
Right now, HYPERLib automatically generates diagrams for the OI and for wiring on every build. We should be able to do something useful with this information, like putting it on a website that the whole team (or at least electrical/pit crew/drive team) knows about. Of the two, I would say the OI Map is more interesting/important, since it changes more frequently, and because the technology involved in the solution would be more useful to learn.
On every commit to master, an updated diagram should be uploaded to some website that pit crew and electrical know about. This will probably be triggered via TravisCI; ask James or Chris about how to do this. The most important part of this task is to communicate clearly with other parts of the team, so that everyone knows where to find the diagrams and actually uses them.
Develop a plugin for the dashboard that displays this info for drivers. This would mean publishing data about how the controls are laid out over NetworkTables, and writing a plugin for the dashboard to take advantage of it. This involves learning about how to work with the OI class (ask James!), how to work with NetworkTables (also ask James, or Google), and how to write a plugin for the dashboard (nobody's done it yet, but that's because nobody has tried).
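As a rough idea of the robot-side half, here is a sketch using the 2017-era NetworkTables API; the table and key names are made up:

```java
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class OiPublisher {
    // Hypothetical: publish each binding so a dashboard plugin can draw the
    // controller layout. In practice the OI class would generate these
    // entries from its own wiring instead of hard-coding them.
    public static void publish() {
        NetworkTable oiTable = NetworkTable.getTable("OI");
        oiTable.putString("driver/button1", "Shift high");
        oiTable.putString("driver/button2", "Shoot");
        oiTable.putString("operator/button4", "Raise lifter");
    }
}
```

The dashboard plugin would then listen to the "OI" table and render whatever it finds there, so the display stays in sync with the code automatically.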
This is a more advanced project that would probably involve working closely with an adult. In fact, it's actually two projects, but work on one will probably affect the other.
Right now, preferences get stored in a few places:
- In the code, we store default values for all preferences.
- On the robot, the file `preferences.ini` stores the current values of preferences.
- At various times, we make backup files of the preferences file and store them in various places.
- Occasionally, we will manually go back in the code and set the default values to the actual values.
This workflow is less than optimal, to say the least. To see why, check out the desktop on laptop #3 or #4: it's completely covered in files like `preferences.ini-backup-7--before-competition`. Really, preferences should be stored with the code. This solves three problems at once:
- We can use Git to track changes to preferences, so there's no need to do so many manual backups.
- Changing out the roboRIO should be seamless, since the preferences should be deployed along with the code.
- There's no need to specify defaults in the code, since the preferences are already stored with the code. At compile/deploy time, we can have a script check that all the preferences are there.
A solution to this would mostly involve writing scripts that run at deploy time, to reconcile changes made on the robot and in the code since the last deploy. We would need to think about how to keep track of which values changed where, and how to decide which values to keep.
To check that everything is consistent at build time, we might need to change how we use preferences in the code, to make things easier for a compile-time script to check. Otherwise, we can check at run-time. Of course, catching errors earlier is always nice, especially when time with the robot is scarce.
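As one possible shape for the build-time half, here is a hypothetical check that compares the committed preferences file against the keys the code expects; the file layout and key list are assumptions:

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical deploy-time check, assuming preferences.ini is committed
// alongside the code and uses simple key=value lines.
public class CheckPreferences {
    public static void main(String[] args) throws Exception {
        Set<String> stored = new HashSet<>();
        for (String line : Files.readAllLines(Paths.get("preferences.ini"))) {
            int eq = line.indexOf('=');
            if (eq > 0) {
                stored.add(line.substring(0, eq).trim().replace("\"", ""));
            }
        }
        // In a real script this list would be generated by scanning the code
        // for preference lookups; it's hard-coded here for illustration.
        List<String> required = Arrays.asList("GearDrive VisionMove P",
                                              "GearDrive VisionMove I");
        for (String key : required) {
            if (!stored.contains(key)) {
                System.err.println("Missing preference: " + key);
            }
        }
    }
}
```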
Last year, we had a lot of preferences. Many of them have names like `GearDrive VisionMove P`. Really, we want this to be something like `GearDrive/VisionMove/P`, where preferences can be stored in a system of sub-preferences. Both HYPERLib and NetworkTables are already set up to do this. The main thing that needs to change is that we need a better UI to edit preferences. This could be just a change in workflow (try using OutlineViewer!), or it could involve coding a plugin for the dashboard. Having the knowledge of how to code a dashboard plugin would be very useful, even if it's not needed here.
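For comparison, here is what the two naming schemes look like through WPILib's `Preferences` class; the keys are just examples:

```java
import edu.wpi.first.wpilibj.Preferences;

public class PrefsExample {
    public static void demo() {
        Preferences prefs = Preferences.getInstance();
        // Today: one flat namespace, with hierarchy faked using spaces.
        double flat = prefs.getDouble("GearDrive VisionMove P", 0.0);
        // Proposed: "/" separators, which NetworkTables already treats
        // as nested sub-tables.
        double nested = prefs.getDouble("GearDrive/VisionMove/P", 0.0);
    }
}
```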
In methods that take commands, such as if/for/while, accept instead a `Consumer<CommandBuilder>`. Then we could write things like:

```java
// "if" is a reserved word in Java, so the real method would need a legal
// name; ifThen is used here purely for illustration. The inner parameter
// is renamed to b because Java lambdas may not shadow the outer builder.
builder.ifThen(() -> /* check some condition */,
        (b) -> {
            b.sequential(/* foo */);
            b.sequential(/* bar */);
        });
```
I would also recommend doing this for `parallel()`. This doesn't give us an immediate benefit, but it does make the syntax a bit neater, and it sets us up for a few of the later things on this list. For example, it means that if we have commands built out of sub-commands, which are in turn built out of sub-commands, we can know about this structure all at once. Only at the very end is a "master command" created.
Currently, if you spawn a command in parallel, the entire group doesn't finish until all parallel commands finish. Sometimes this is desired, and sometimes it is not. In order to mitigate this, we have a `release()` command, which runs a "do-nothing" command on a given subsystem to cancel the parallel command. This is ugly and hack-ish, and breaks down in a few corner cases.
Instead, I propose that we make two changes to how parallel commands work:
- Every parallel command is given a name/identifier. A parallel command can end one of two ways: `kill()` or `join()`. Killing a command ends it instantly, while joining to it waits for it to finish. CommandBuilder should require that every use of `parallel()` has a corresponding `kill()` or `join()`.
- When a parallel command is spawned, it takes ownership of any subsystems it requires. It should be considered an error to use this subsystem again before the command is ended with `kill()` or `join()`.
These changes ensure that commands always end at very predictable times. Change #1 forces the programmer to think about when this happens. Change #2 ensures that it doesn't happen at any other time.
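Here is how the proposed API might read in practice; `parallel` with a name, `join`, and `kill` are all hypothetical at this point:

```java
// Hypothetical usage of the proposed API; none of this exists yet.
builder.parallel("spinUp", /* spin up the shooter */);
builder.sequential(/* drive to the goal */);
builder.join("spinUp");   // wait until the shooter is up to speed
builder.sequential(/* shoot */);

builder.parallel("intake", /* run the intake */);
builder.sequential(/* cross the field */);
builder.kill("intake");   // stop the intake immediately
```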
From looking at WPILib, it appears that if a CommandGroup requires a subsystem but is not using it at that moment, then no command is running on that subsystem. However, it would seem more desirable to have the default command run during this time. If we already track ownership of subsystems from the previous section, then it should be possible to detect this situation. If we do, we can either inject the default command, or we can just issue a warning.
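With ownership tracking in place, the check itself could be as simple as this hypothetical pass (assuming `ownedSubsystems` is a set we maintain ourselves):

```java
// Hypothetical periodic check; ownedSubsystems is a set we would maintain
// ourselves as parallel commands take and release subsystems.
for (Subsystem s : ownedSubsystems) {
    if (s.getCurrentCommand() == null) {
        // Either start the default command here, or just warn:
        DriverStation.reportWarning(
                s.getName() + " is required by a group but idle", false);
    }
}
```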
Write a plugin for the dashboard that can print the entire graph of a command made with CommandBuilder. This means encoding this information into NetworkTables somehow, and having CommandBuilder inject a command to send it when a new command starts. To monitor the progress of commands, we can inject commands that update the state before/after all child commands. If you want to get fancy, this should also let us set breakpoints (again, by injecting commands to wait where the breakpoints are).
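A minimal sketch of the robot side, assuming the 2017-era NetworkTables API; the table layout and class name are invented:

```java
import edu.wpi.first.wpilibj.command.Command;
import edu.wpi.first.wpilibj.networktables.NetworkTable;

// Hypothetical: CommandBuilder would insert one of these markers
// before/after each child command to report progress to the dashboard.
public class ProgressMarker extends Command {
    private final String path; // position of the child in the command graph

    public ProgressMarker(String path) {
        this.path = path;
    }

    @Override
    protected void initialize() {
        NetworkTable.getTable("commandGraph").putString("current", path);
    }

    @Override
    protected boolean isFinished() {
        return true; // fires once and ends immediately
    }
}
```

A breakpoint would be the same trick with `isFinished()` returning false until the dashboard flips a flag in the table.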
There's too much to write down now. I'll just say that this probably depends on better commands and better preferences, so we should prioritize those first. See the conversation in Slack for more.
This project would use lots of reflection, if that's the stuff you're into.