Currently there is almost no research in this area, so you would have to narrow down the options using heuristics.
The Old Way
Last year, I somewhat informally narrowed down the options until I chose to run a study on handing out educational leaflets. The rough steps I followed were to:
Ask experts and Executive Directors of charities what studies would be valuable.
Look at interventions with promising preliminary findings. In this case, leafleting already showed positive results in one study.
Rule out the ones that are too costly or would take too much time.
The New Way
I have recently been thinking of a different system that may work better than this. It basically boils down to using the heuristic of neglectedness a lot more, and would look more like the following:
Check which animal populations are the most neglected. This can be determined by looking at total funding per animals killed. Animal Charity Evaluators (ACE), thankfully, already has this data in a neat chart. This step would suggest that farm animals are the first area to look at.
Assess which broad interventions the farm animal donations have supported: for example, political lobbying, vegetarian advocacy, meat alternatives, etc. Compare this to their broad accomplishments, like passing laws, persuading people to become vegetarian, or increasing the sales of meat alternatives. This information is not easy to acquire, and I am not sure what conclusion it would yield.
Narrow the broad interventions down to more specific actions and find out how much funding goes into each. For example, in outreach, specific actions might include flyering, documentaries, events, etc. Then, compare this to the achievements in each of the areas. For outreach, this information could be acquired via surveying vegetarians about what convinced them to switch over. This has already been done by The Humane League Labs.
Based on this you would be able to get a sense of which interventions seem to have an oversized impact per cost. You could then design a randomized controlled trial that would be feasible cost/time-wise and test what kind of an impact the intervention had in a much more thorough way.
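The steps above amount to two simple ratio comparisons: funding per animal killed to pick a population, then effect per dollar to pick an intervention within it. A minimal sketch of that arithmetic follows; every number here is invented for illustration, and the real figures would come from sources like ACE's funding chart and The Humane League Labs survey.

```python
# A toy version of the neglectedness + impact-per-cost heuristic.
# All figures are hypothetical placeholders, not real data.

# Step 1: neglectedness = funding per animal killed (lower = more neglected).
populations = {
    # population: (annual funding in USD, animals killed per year)
    "farm animals": (20_000_000, 9_000_000_000),
    "lab animals": (15_000_000, 25_000_000),
    "companion animals": (2_500_000_000, 3_000_000),
}
neglectedness = {
    name: funding / killed for name, (funding, killed) in populations.items()
}
most_neglected = min(neglectedness, key=neglectedness.get)

# Steps 2-4: within the chosen population, compare specific interventions
# by estimated effect per dollar (here, vegetarian "conversions" per dollar).
interventions = {
    # intervention: (annual funding in USD, estimated conversions)
    "flyering": (1_000_000, 20_000),
    "documentaries": (300_000, 15_000),
    "events": (500_000, 5_000),
}
impact_per_dollar = {
    name: converted / funding
    for name, (funding, converted) in interventions.items()
}
# The top-ranked intervention is the most promising candidate for a
# randomized controlled trial.
ranking = sorted(impact_per_dollar, key=impact_per_dollar.get, reverse=True)

print(most_neglected)  # farm animals
print(ranking)         # ['documentaries', 'flyering', 'events']
```

With these made-up numbers, documentaries come out ahead of flyering precisely because they show a comparable effect on much less funding, which is the kind of result the narrative below anticipates.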
By using this system instead of the one I used before, I might have gotten the same result, but even at worst I would also have gotten a better sense of what other areas might be worth testing. At best, I might have found areas that looked considerably more promising than flyering. For example, if documentaries have both a larger effect and less funding than vegetarian flyering, it would be better to test the effects of screening a vegetarian documentary instead.
It is important to weigh the pros and cons of any method, so here are some potential flaws of this system:
I can imagine it being worth putting more time into checking out all the broad interventions. Legal advocacy, veg outreach, and meat alternatives might all look promising and be worth investigating at a more specific level.
It might be better to simply determine what areas work on average, instead of figuring out the best specific intervention.
This method might take so long that it would not be worth the time.
That being said, this seems like a potentially interesting way of determining which specific interventions would be worth running a randomized controlled trial on, in an area with very limited evidence so far.