Hey Guys,
I am having problems passing a TableInfo to a PipelineJob within the ms2 module. If I pass null in place of the TableInfo I get a small null-pointer exception, but everything else runs smoothly: the job's run() and done() get called, the job shows up in the queue list, it logs fine, etc. When I do pass the TableInfo (a SpectraCountTableInfo) I get an enormous exception, several LabKey errors along the lines of "failed to set job status to X," and my system slows to a crawl for several minutes. Nonetheless, once that long process is complete, the job gets run and the TableInfo behaves as you would expect: I can access it from within the job without any trouble.
So my question is: if I want to pass a large chunk of data to a PipelineJob, should I use a TableInfo, or something else? The PipelineJobs I've been studying for reference all deal with File objects, but that doesn't really suit my needs (I think)... I want to pass a (possibly large) piece of data from the database to a job so it can be processed while I watch it in the queue... I was going off what I found in the Demo module as far as using the TableInfo goes, and I don't see what the problem is. Maybe I'm missing something. I'm working off a fairly recent (a month or so old) trunk checkout, on Xubuntu with Java 1.6.0.
Thanks for reading,
Nate
Insillicos
Here's an outline of the relevant code:
public class PepcAction extends RunListHandlerAction<SpectraCountForm, QueryView>
{
    .... etc ...
    public ModelAndView getView(SpectraCountForm form, BindException errors) throws Exception
    {
        .... etc ...
        TableInfo table = schema.createSpectraCountTable(_config, getViewContext(), _form);
        if (null != table)
        {
            PepcPipelineJob job = new PepcPipelineJob(getViewBackgroundInfo(), table);
            service.queueJob(job);
        }
        .... etc ...
jeckels responded:
2008-09-12 13:32
Hi Nate,
PipelineJobs should all be serializable. This is important because many jobs can be executed on different machines and we need a way to transfer them around. Additionally, they may have to sit in a queue for a long time and you don't want them to hold onto a lot of in-memory data. In your particular case, it sounds like your job will always run on the web server, but it still needs to be serializable.
What this means is that you should pass a job the information it needs to look up data, instead of the TableInfo itself. For example, you might pass it a list of run ids, a URL that includes the filters and sorts, etc. Then, inside your job when it starts running, go ahead and create the TableInfo. You may need to create a different MS2Schema.createSpectraCountTable() method that doesn't rely on the ViewContext directly, since that class won't be serializable.
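In rough outline, the job could look something like the sketch below. This is untested against your checkout, and the exact PipelineJob constructor arguments and abstract methods vary between versions, so treat the super() call, the _runIds field, and the Container/User-based createSpectraCountTable() overload as placeholders for whatever identifying information your job really needs:

import java.util.List;

import org.labkey.api.pipeline.PipelineJob;
import org.labkey.api.view.ViewBackgroundInfo;

public class PepcPipelineJob extends PipelineJob
{
    // Hold only cheap, serializable state: run ids, filter/sort strings, a URL, etc.
    private final List<Integer> _runIds;

    public PepcPipelineJob(ViewBackgroundInfo info, List<Integer> runIds)
    {
        super("Pepc", info);   // provider name and constructor args: adjust for your version
        _runIds = runIds;
    }

    public void run()
    {
        // Recreate the heavyweight, non-serializable objects here, once the job is
        // actually executing. You'd need a createSpectraCountTable() variant that
        // takes the Container and User from the job's ViewBackgroundInfo instead of
        // a ViewContext, as described above.
        // TableInfo table = schema.createSpectraCountTable(_config, container, user, _runIds);
        // ... iterate over the table, do the processing, then set the job status ...
    }

    public String getDescription()
    {
        return "Spectra count processing for " + _runIds.size() + " runs";
    }

    // ... plus whatever other abstract PipelineJob methods your version requires ...
}

Queuing it from your action stays the same as in your snippet, just handing over the ids instead of the TableInfo:

PepcPipelineJob job = new PepcPipelineJob(getViewBackgroundInfo(), runIds);
service.queueJob(job);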
Hope this helps.
Thanks,
Josh