Tickets/dm 37305 #80
base: develop
Conversation
Force-pushed from a0df3d9 to 919dca3.
Hello Patrick,
I'm finally sending my comments on this PR. It took a while to understand the scripts (and all the infrastructure behind them: the base scripts, the containers, the testing, the CSCs, etc.) and get to the point where I could comment, but the path there was also really rewarding and motivating. Most of my comments are questions about how things will work, and a few can be seen as suggestions that I hope will be valuable.
Note that I'm submitting the review as "Comments" only. I look forward to seeing what comes next and to how this script evolves until it is ready to be merged.
Thanks,
David Sanmartim.
import asyncio

from lsst.ts.externalscripts.auxtel.build_pointing_model import BuildPointingModel
Suggested change:
- from lsst.ts.externalscripts.auxtel.build_pointing_model import BuildPointingModel
+ from lsst.ts.externalscripts.auxtel.latiss_take_flats import LatissTakeFlats
The use of a commit suggestion is nice, but mostly not necessary. Incorporating committed changes through GitHub is sort of asking for trouble.
In the future, feel free to just say "should be LatissTakeFlats" and we'll fix it.
Ok. And sorry for that.
from lsst.ts.externalscripts.auxtel.build_pointing_model import BuildPointingModel

asyncio.run(BuildPointingModel.amain())
Suggested change:
- asyncio.run(BuildPointingModel.amain())
+ asyncio.run(LatissTakeFlats.amain())
I'm not sure if this is related, but the test_executable method from test_latiss_take_flats.py is failing in my container. I also noticed that this test appears as "Fixed" in Jenkins, although it keeps failing for me. More comments in the test_latiss_take_flats module section.
So this passes in my container and in Jenkins (as you said), but I'll read your comments below.
)

# Check that the LATISS filter/grating combination is valid
if latiss_filter == "SDSSr_65mm" and latiss_grating == "empty_1":
I wonder if it wouldn't be better to have a default sequence for each of the filters available at any given time (after defining what the default parameters are, of course). As the sequence might be really big, maybe this method should load the steps from something like a default_sequence.yaml file.
Yes! That's the plan!
We need to establish configs for each filter+grating combination.
Question for bonus points: where do you think those sequence files belong?
Good question. I see that a default sequence for the test cases lives inside tests/data/auxtel. I can also see that there is a data/scripts folder for the executable scripts, but I don't see a similar data folder inside the main scripts folder (/externalscripts/auxtel). I looked around (e.g. ts_standardscripts) and could not find a similar case, so I'm not much in favor of creating a data folder inside the main scripts folder. Maybe it could live in lsst-ts/ts_config_ocs, where the scheduler/survey config files live.
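As a rough sketch of the idea under discussion (the file layout, file name pattern, and keys below are hypothetical, not something defined in this PR), a per filter+grating default sequence could be read from YAML roughly like this:

import yaml
from pathlib import Path

def load_default_sequence(config_dir: Path, latiss_filter: str, latiss_grating: str) -> list:
    """Load the default flat sequence for one filter/grating combination.

    Assumes one YAML file per combination, e.g.
    default_sequences/SDSSr_65mm_empty_1.yaml, whose top-level
    "sequence" key holds a list of step dictionaries such as
    {"fs_exp_time": 1, "fs_n_exp": 1, "em_exp_time": 5}.
    """
    sequence_file = config_dir / f"{latiss_filter}_{latiss_grating}.yaml"
    with open(sequence_file) as f:
        data = yaml.safe_load(f)
    return data["sequence"]

Wherever the files end up living (tests/data, a script data folder, or ts_config_ocs), the script would only need the directory path to resolve the right file.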
"fs_exp_time": 1, | ||
"fs_n_exp": 1, | ||
"em_exp_time": 5, |
Shouldn't the fs_exp_time and em_exp_time values be equal? If so, maybe we should check this somewhere when a custom sequence is provided. Maybe not now, as we still don't know what the input parameters will look like, but in the future.
So the electrometer and fiber spectrograph have different sensitivities, especially as a function of wavelength. This means that each system receives a different count rate and saturates at a different time. So for a given flat, which for example's sake is 30 s long, it's possible that we'll take ten 3 s exposures with the fiber spectrograph and three 10 s scans with the electrometer.
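To make that arithmetic concrete (this helper is only an illustration of the reasoning above, not code from the script):

import math

def split_exposures(flat_exp_time: float, max_sub_exp_time: float) -> tuple:
    """Split one flat into n sub-exposures no longer than max_sub_exp_time."""
    n_exp = math.ceil(flat_exp_time / max_sub_exp_time)
    exp_time = flat_exp_time / n_exp
    return n_exp, exp_time

# For a 30 s flat: a fiber spectrograph limited to ~3 s per exposure gives
# (10, 3.0), while an electrometer limited to ~10 s per scan gives (3, 10.0),
# matching the ten 3 s exposures and three 10 s scans in the example above.
print(split_exposures(30, 3))   # (10, 3.0)
print(split_exposures(30, 10))  # (3, 10.0)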
# valid sequence, using the default

async with self.make_script():
    # Try configure with invalid sequence data.
Is this comment valid here?
# indication simulation for s3 bucket.
await self.script.arun(simulation_mode=1)

async def test_sequence_custom(self):
I found this method a little redundant with the test_configure method, as a substantial portion of it repeats the configure test executed there. Wouldn't it be better to have just one test case then?
I understand that the idea is to have isolated and independent methods, but at the same time we usually don't like to repeat code, so I'm not really sure what to do here; I just wanted to raise the question.
Fair comment, there is overlap. I think what happened is that when developing this I started with the test_configure bit, which is very fast to run, and then wrote the second test, which actually executes the script and so takes much longer.
I suppose the proper thing is to remove the overlapping bit from the test_configure method.
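One way to keep the two tests independent while avoiding most of the repetition would be to factor the shared configuration check into a helper on the test class, sketched below; make_script and arun(simulation_mode=1) appear in this PR, configure_script is assumed to be the usual BaseScriptTestCase helper, and the remaining names (assert_configure_custom, custom_sequence, self.script.sequence) are hypothetical:

    async def assert_configure_custom(self, sequence):
        # Shared configuration check; assumes the script was already
        # created via make_script.
        await self.configure_script(sequence=sequence)
        assert self.script.sequence == sequence

    async def test_configure(self):
        async with self.make_script():
            await self.assert_configure_custom(self.custom_sequence)

    async def test_sequence_custom(self):
        async with self.make_script():
            await self.assert_configure_custom(self.custom_sequence)
            # Only this test actually runs the script.
            await self.script.arun(simulation_mode=1)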
Mocks the taking of fiber spectrograph images
"""

# FIXME TO SUPPORT MULTIPLE IMAGES
I just want to say that I'm curious about how to implement this :)
So the real answer is to create a high-level controller for this CSC that takes the multiple images, then just mock that 😃 This is the type of thing that will get ripped out of this script in the future.
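A minimal sketch of that mocking approach, assuming a hypothetical high-level controller with a single take_fiber_spectra method (neither exists in this PR; the point is that the test mocks one high-level call instead of every CSC command):

import asyncio
from unittest import mock

async def demo():
    # Stand-in for a high-level controller that takes multiple images.
    controller = mock.AsyncMock()
    controller.take_fiber_spectra.return_value = [
        "FS_O_20230101_000001",
        "FS_O_20230101_000002",
    ]
    # The script would await this single call; the test only needs to
    # mock this one method rather than the individual CSC commands.
    exposures = await controller.take_fiber_spectra(exp_time=3, n_exp=2)
    assert len(exposures) == 2

asyncio.run(demo())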
async def test_executable(self):
    scripts_dir = externalscripts.get_scripts_dir()
    script_path = scripts_dir / "auxtel" / "latiss_take_flats.py"
    logger.debug(f"Checking for script in {script_path}")
    await self.check_executable(script_path)
I'm running all the test cases in my Docker container and all but this one (test_executable) are passing. At first I thought it was related to scripts/auxtel/latiss_take_flats.py not being configured correctly, so I fixed it as suggested in the corresponding review comment. However, the failure did not disappear; it raises a TimeoutError as shown below. Interestingly, Jenkins says that the test_executable method was "Fixed", so I assume it was failing at some point and then got fixed. I just don't understand why it fails when running inside the container.
======================================================================
ERROR: test_executable (test_latiss_take_flats.TestLatissTakeFlats)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/opt/lsst/software/stack/conda/envs/lsst-scipipe-5.1.0/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
return fut.result()
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/lsst/software/stack/conda/envs/lsst-scipipe-5.1.0/lib/python3.10/unittest/async_case.py", line 64, in _callTestMethod
self._callMaybeAsync(method)
File "/opt/lsst/software/stack/conda/envs/lsst-scipipe-5.1.0/lib/python3.10/unittest/async_case.py", line 87, in _callMaybeAsync
return self._asyncioTestLoop.run_until_complete(fut)
File "/opt/lsst/software/stack/conda/envs/lsst-scipipe-5.1.0/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/opt/lsst/software/stack/conda/envs/lsst-scipipe-5.1.0/lib/python3.10/unittest/async_case.py", line 101, in _asyncioLoopRunner
ret = await awaitable
File "/opt/lsst/tssw/ts_externalscripts/tests/auxtel/test_latiss_take_flats.py", line 261, in test_executable
await self.check_executable(script_path)
File "/opt/lsst/tssw/ts_standardscripts/python/lsst/ts/standardscripts/base_script_test_case.py", line 123, in check_executable
state = await remote.evt_state.next(flush=False, timeout=MAKE_TIMEOUT)
File "/opt/lsst/tssw/ts_salobj/python/lsst/ts/salobj/topics/read_topic.py", line 657, in next
return await self._next(timeout=timeout)
File "/opt/lsst/tssw/ts_salobj/python/lsst/ts/salobj/topics/read_topic.py", line 669, in _next
return await asyncio.wait_for(self._next_task, timeout=timeout)
File "/opt/lsst/software/stack/conda/envs/lsst-scipipe-5.1.0/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError
----------------------------------------------------------------------
Ran 1 test in 155.642s
FAILED (errors=1)
Hard to tell what the issue is from that. Did you run the test in debug mode?
No, I did not. But I'm having the same issue when running other test cases (e.g. test_correct_pointing.py), so I'll try to understand what is happening in my container.
# Prepare summary table for writing
await self.prepare_summary_table()

for i, self.step in enumerate(self.sequence):
What happens if the script fails in the middle of the sequence? Should the script be run from the beginning again, or maybe should it be re-run with an updated configuration (in case a custom sequence is provided) that takes that into account?
If a default sequence is used, then I guess the solution would be "run everything again", right? Could the summary_table somehow be used to keep track of what was already done, so the script can be recovered from that point?
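A rough sketch of that resume idea, assuming the summary table is a list of per-step dictionaries with a "completed" flag and that a hypothetical execute_step method runs one step (the real table layout and method names may well differ):

    async def run_sequence(self):
        # Steps recorded as completed in a previous, partial run.
        completed = {
            row["step_index"]
            for row in self.summary_table
            if row.get("completed")
        }
        for i, step in enumerate(self.sequence):
            if i in completed:
                self.log.info(f"Step {i} already completed, skipping.")
                continue
            await self.execute_step(step)
            self.summary_table.append({"step_index": i, "completed": True})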
Force-pushed from 7f4d0cf to 7cdd000.
… the atcalsys class.
Add sequence option to the configuration to pass more complex sequences.
…mation is preserved
…se integers for the time being. Also modify config to support custom sequences.
Force-pushed from a5edde2 to 9cd2eee.
No description provided.