Commit 01e109c

kecnry and aprsa authored

2.4.13 bugfix release (#838)
* Import from within source tree overly restrictive (#819)

  In order to avoid same-name module conflicts with the standard library (such as io.py), phoebe shouldn't be imported from its own source tree. The original test eliminated the entire package tree, including all auxiliary directories, which was overly restrictive and prevented pytest from running when phoebe was installed with `pip -e`. This PR fixes that by blocking imports only from the root directory and from the root/phoebe tree (the actual sources). Closes #806.

* Update tests for changes to pytest (#820)

  The new pytest version raises an error instead of a warning when tests return a value and when assert is used incorrectly. All tests have been touched up so that they pass under the latest pytest. In addition, the f-string format f'{var=}' was introduced in python 3.8 and caused the 3.7 tests to fail; this is now fixed.

* Dynamical RVs now avoid meshing (#823)

  Calling b.compute_pblums() built the mesh and treated dynamical RVs as mesh-dependent. This fixes that for run_compute() as well as for the direct compute_pblums(), compute_l3s(), and compute_ld_coeffs() bundle methods. A new generalized filter, b._datasets_where(), selects the datasets that require a mesh, i.e. 'lc', 'lp', and flux-weighted 'rv'. Even if the phoebe backend doesn't need meshes for a particular dataset, other backends might; this exception is also handled in _datasets_where(). Closes #812.

* run_checks_compute() bugfix (#837)

  run_checks_compute() included the internal "_default" compute parameter set when checking for the existence of the ck2004 model atmosphere tables in the passband files. This caused the checks to fail even when ck2004 was never needed.

* Fix treatment of distance for alternate backends (#832)

  Don't duplicate distance handling for alternate backends; add a regression test using the top-level default_binary (jktebop temporarily skipped).

* Version and changelog for the 2.4.13 release; update setup-python to v5 to avoid a deprecation warning.

Co-authored-by: Andrej Prsa <[email protected]>
Co-authored-by: Kyle Conroy <[email protected]>
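The pytest behavior change behind #820 is easy to illustrate; a hypothetical before/after pair (not taken from the phoebe test suite) showing why tests that return a value had to be rewritten:

```python
# Before: the test returns its check. Older pytest only warned
# (PytestReturnNotNoneWarning); newer pytest raises an error, and the
# returned boolean was never actually checked anyway.
def test_flux_ratio_old():
    ratio = 8.0 / 4.0
    return ratio == 2.0

# After: assert directly, so pytest is happy and a failure is reported.
def test_flux_ratio():
    ratio = 8.0 / 4.0
    assert ratio == 2.0
```

The fix across the suite was mechanical: replace `return <condition>` with `assert <condition>` so the test function returns None.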
1 parent 8833cbf commit 01e109c

File tree

53 files changed: +830 −856 lines changed

.github/workflows/on_pr.yml

Lines changed: 1 addition & 1 deletion

@@ -26,7 +26,7 @@ jobs:
         uses: actions/checkout@v4

       - name: Setup python ${{ matrix.python-version }}
-        uses: actions/setup-python@v4
+        uses: actions/setup-python@v5
         with:
           python-version: ${{ matrix.python-version }}
README.md

Lines changed: 6 additions & 0 deletions

@@ -85,6 +85,12 @@ To understand how to use PHOEBE, please consult the [tutorials, scripts and manu
 CHANGELOG
 ----------

+### 2.4.13
+
+* optimization: dynamical RVs avoid unnecessary meshing
+* run_checks no longer requires ck2004 atmosphere tables if no datasets use ck2004
+* fix treatment of distance for alternate backends (ellc, jktebop)
+
 ### 2.4.12 - build system update

 * upgrade the build system to pyproject.toml with setuptools as backend and pip as frontend.

phoebe/__init__.py

Lines changed: 4 additions & 4 deletions

@@ -17,7 +17,7 @@

 """

-__version__ = '2.4.12'
+__version__ = '2.4.13'

 import os as _os
 import sys as _sys
@@ -26,9 +26,9 @@
 import atexit
 import re

-# People shouldn't import Phoebe from the installation directory (inspired upon
-# pymc warning message).
-if _os.getcwd().find(_os.path.abspath(_os.path.split(_os.path.split(__file__)[0])[0]))>-1:
+# People shouldn't import phoebe from the root directory or from root/phoebe:
+_root_dir = _os.path.abspath(_os.path.split(_os.path.split(__file__)[0])[0])
+if _os.getcwd() == _root_dir or _os.path.join(_root_dir, 'phoebe') in _os.getcwd():
     # We have a clash of package name with the standard library: we implement an
     # "io" module and also they do. This means that you can import Phoebe from its
     # main source tree; then there is no difference between io from here and io

phoebe/frontend/bundle.py

Lines changed: 81 additions & 43 deletions

@@ -3765,7 +3765,7 @@ def run_checks_compute(self, compute=None, solver=None, solution=None, figure=No

         pb_needs_Inorm = True
         pb_needs_Imu = True
-        pb_needs_ld = True #np.any([p.get_value()!='interp' for p in self.filter(qualifier='ld_mode', dataset=pbparam.dataset, context='dataset', **_skip_filter_checks).to_list()])
+        pb_needs_ld = True
         pb_needs_ldint = True

         missing_pb_content = []
@@ -3780,8 +3780,10 @@ def run_checks_compute(self, compute=None, solver=None, solution=None, figure=No
                     True, 'run_compute')

         # NOTE: atms are not attached to datasets, but per-compute and per-component
-        for atmparam in self.filter(qualifier='atm', kind='phoebe', compute=computes, **_skip_filter_checks).to_list() + self.filter(qualifier='ld_coeffs_source').to_list():
-
+        # NOTE: atmparam includes a '_default' compute pset, which depends on the
+        # ck2004 atmospheres; the checks should not include it. This is achieved
+        # by filtering check_visible=False and check_default=True in the line below:
+        for atmparam in self.filter(qualifier='atm', kind='phoebe', compute=computes, check_visible=False, check_default=True).to_list() + self.filter(qualifier='ld_coeffs_source').to_list():
             # check to make sure passband supports the selected atm
             atm = atmparam.get_value(**_skip_filter_checks)
             if atmparam.qualifier == 'ld_coeffs_source' and atm == 'auto':
@@ -10199,7 +10201,6 @@ def ui_figures(self, web_client=None, blocking=None):

         return self._launch_ui(web_client, 'figures', blocking=blocking)

-
     def compute_ld_coeffs(self, compute=None, set_value=False, **kwargs):
         """
         Compute the interpolated limb darkening coefficients.
@@ -10246,19 +10247,25 @@ def compute_ld_coeffs(self, compute=None, set_value=False, **kwargs):
             appropriate length given the respective value of `ld_func`).
         """

+        # check to make sure value of passed compute is valid
         if compute is None:
-            if len(self.computes)==1:
+            if len(self.computes) == 1:
                 compute = self.computes[0]
             else:
                 raise ValueError("must provide compute")
         if not isinstance(compute, str):
             raise TypeError("compute must be a single value (string)")

-        compute_ps = self.get_compute(compute, **_skip_filter_checks)
+        datasets = kwargs.pop('dataset') if 'dataset' in kwargs else self._datasets_where(compute=compute, mesh_needed=True)
+
         # we'll add 'bol' to the list of default datasets... but only if bolometric is needed for irradiation
+        compute_ps = self.get_compute(compute, **_skip_filter_checks)
         needs_bol = compute_ps.get_value(qualifier='irrad_method', irrad_method=kwargs.get('irrad_method', None), default='none', **_skip_filter_checks) != 'none'
+        if needs_bol:
+            datasets += ['bol']

-        datasets = kwargs.pop('dataset', self.datasets + ['bol'] if needs_bol else self.datasets)
+        if len(datasets) == 0:
+            return {}
         components = kwargs.pop('component', self.components)

         # don't allow things like model='mymodel', etc
@@ -10383,6 +10390,27 @@ def restore_conf():

         return system

+    def _datasets_where(self, compute, mesh_needed=False, l3_needed=False):
+        datasets = self.filter(compute=compute, context='compute', qualifier='enabled', value=True, **_skip_filter_checks).datasets
+        ds_kinds = [self.filter(dataset=ds, context='dataset', **_skip_filter_checks).kind for ds in datasets]
+        backend = self.filter(compute=compute, context='compute', **_skip_filter_checks).kind
+
+        subset = []
+
+        if l3_needed:
+            subset += [ds for ds in datasets if len(self.filter(qualifier='l3_mode', dataset=ds, context='dataset', check_visible=True)) > 0]
+
+        if mesh_needed:
+            subset += [ds for ds, kind in zip(datasets, ds_kinds)
+                       if kind == 'lc'
+                       or kind == 'lp'
+                       or (kind == 'rv' and backend != 'phoebe')
+                       or (kind == 'rv' and len(self.filter(qualifier='rv_method', dataset=ds, compute=compute, value='flux-weighted', **_skip_filter_checks)) > 0)
+                       ]
+
+        # subset can have repeated entries; return unique occurrences:
+        return list(set(subset))
+
     def compute_l3s(self, compute=None, use_pbfluxes={},
                     set_value=False, **kwargs):
         """
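The filtering rule in `_datasets_where()` above can be sketched independently of the bundle machinery. A hypothetical, simplified stand-in (dataset kinds and rv_method values are passed explicitly here, whereas the real method pulls them from the bundle's parameter sets):

```python
def datasets_needing_mesh(datasets, backend='phoebe'):
    """datasets: list of dicts with 'name', 'kind', and (for RVs) 'rv_method'.
    A mesh is needed for light curves ('lc'), line profiles ('lp'),
    flux-weighted RVs, and any RV when a non-phoebe backend is in play,
    since alternate backends cannot skip meshing for dynamical RVs."""
    subset = []
    for ds in datasets:
        kind = ds['kind']
        if kind in ('lc', 'lp'):
            subset.append(ds['name'])
        elif kind == 'rv' and (backend != 'phoebe'
                               or ds.get('rv_method') == 'flux-weighted'):
            subset.append(ds['name'])
    # mirror the real method: de-duplicate before returning
    return sorted(set(subset))

dss = [{'name': 'lc01', 'kind': 'lc'},
       {'name': 'rv01', 'kind': 'rv', 'rv_method': 'dynamical'},
       {'name': 'rv02', 'kind': 'rv', 'rv_method': 'flux-weighted'}]

print(datasets_needing_mesh(dss))                  # dynamical RV excluded
print(datasets_needing_mesh(dss, backend='ellc'))  # alt backend: all RVs included
```

This is the heart of the #823 optimization: with only dynamical RVs enabled under the phoebe backend, the list is shorter, so compute_pblums() and friends can return early without ever building a mesh.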
@@ -10426,12 +10454,6 @@ def compute_l3s(self, compute=None, use_pbfluxes={},
         """
         logger.debug("b.compute_l3s")

-        datasets = kwargs.pop('dataset', self.filter('l3_mode', check_visible=True).datasets)
-        if isinstance(datasets, str):
-            datasets = [datasets]
-
-
         if compute is None:
             if len(self.computes)==1:
                 compute = self.computes[0]
@@ -10440,6 +10462,12 @@ def compute_l3s(self, compute=None, use_pbfluxes={},
         if not isinstance(compute, str):
             raise TypeError("compute must be a single value (string)")

+        # either take user-passed datasets or datasets that have an l3_mode:
+        datasets = kwargs.pop('dataset') if 'dataset' in kwargs else self._datasets_where(compute=compute, l3_needed=True)
+        if isinstance(datasets, str):
+            datasets = [datasets]
+
+        # make sure all parameters are up to date:
         self.run_delayed_constraints()

         datasets_need_pbflux = [d for d in datasets if d not in use_pbfluxes.keys()]
@@ -10457,10 +10485,9 @@ def compute_l3s(self, compute=None, use_pbfluxes={},
                                               **kwargs)

         # don't allow things like model='mymodel', etc
-        if not kwargs.get('skip_checks', False):
-            forbidden_keys = parameters._meta_fields_filter
-            compute_ps = self.get_compute(compute, **_skip_filter_checks)
-            self._kwargs_checks(kwargs, additional_allowed_keys=['system', 'skip_checks', 'ret_structured_dicts', 'pblum_method']+compute_ps.qualifiers, additional_forbidden_keys=forbidden_keys)
+        forbidden_keys = parameters._meta_fields_filter
+        compute_ps = self.get_compute(compute, **_skip_filter_checks)
+        self._kwargs_checks(kwargs, additional_allowed_keys=['system', 'skip_checks', 'ret_structured_dicts', 'pblum_method']+compute_ps.qualifiers, additional_forbidden_keys=forbidden_keys)

         ret_structured_dicts = kwargs.get('ret_structured_dicts', False)
         l3s = {}
@@ -10620,7 +10647,26 @@ def compute_pblums(self, compute=None, model=None, pblum=True, pblum_abs=False,
         """
         logger.debug("b.compute_pblums")

-        datasets = kwargs.pop('dataset', self.filter(qualifier='passband').datasets)
+        # check to make sure value of passed compute is valid
+        if compute is None:
+            if len(self.computes) == 1:
+                compute = self.computes[0]
+            else:
+                raise ValueError("must provide compute")
+        if not isinstance(compute, str):
+            raise TypeError("compute must be a single value (string)")
+
+        compute_ps = self.get_compute(compute=compute, **_skip_filter_checks)
+        ret_structured_dicts = kwargs.get('ret_structured_dicts', False)
+
+        # either take user-passed datasets or datasets that require a mesh:
+        datasets = kwargs.pop('dataset') if 'dataset' in kwargs else self._datasets_where(compute=compute, mesh_needed=True)
+
+        if len(datasets) == 0:
+            if ret_structured_dicts:
+                return None, {}, {}, {}, {}
+            return {}
+
         if isinstance(datasets, str):
             datasets = [datasets]

@@ -10638,16 +10684,6 @@ def compute_pblums(self, compute=None, model=None, pblum=True, pblum_abs=False,
         else:
             components = valid_components

-        # check to make sure value of passed compute is valid
-        if compute is None:
-            if len(self.computes)==1:
-                compute = self.computes[0]
-            else:
-                raise ValueError("must provide compute")
-        if not isinstance(compute, str):
-            raise TypeError("compute must be a single value (string)")
-
-        compute_ps = self.get_compute(compute=compute, **_skip_filter_checks)
         # NOTE: this is flipped so that stefan-boltzmann can manually be used even if the compute-options have kind='phoebe' and don't have that choice
         pblum_method = kwargs.pop('pblum_method', compute_ps.get_value(qualifier='pblum_method', default='phoebe', **_skip_filter_checks))
         t0 = self.get_value(qualifier='t0', context='system', unit=u.d, t0=kwargs.pop('t0', None), **_skip_filter_checks)
@@ -10716,7 +10752,6 @@ def compute_pblums(self, compute=None, model=None, pblum=True, pblum_abs=False,
         else:
             raise ValueError("pblum_method='{}' not supported".format(pblum_method))

-        ret_structured_dicts = kwargs.get('ret_structured_dicts', False)
         ret = {}

         # pblum_*: {dataset: {component: value}}
@@ -11666,6 +11701,7 @@ def run_compute(self, compute=None, model=None, solver=None,
         * ValueError: if any given dataset is enabled in more than one set of
           compute options sent to run_compute.
         """
+
         # NOTE: if we're already in client mode, we'll never get here in the client
         # there detach is handled slightly differently (see parameters._send_if_client)
         if isinstance(detach, str):
@@ -11713,6 +11749,7 @@ def run_compute(self, compute=None, model=None, solver=None,
         # NOTE: _prepare_compute calls run_checks_compute and will handle raising
         # any necessary errors
         model, computes, datasets, do_create_fig_params, changed_params, overwrite_ps, kwargs = self._prepare_compute(compute, model, dataset, from_export=False, **kwargs)
+
         _ = kwargs.pop('do_create_fig_params', None)

         if use_server is None:
@@ -11912,8 +11949,7 @@ def restore_conf():
             # TODO: have this return a dictionary like pblums/l3s that we can pass on to the backend?

             # we need to check both for enabled but also passed via dataset kwarg
-            ds_kinds_enabled = self.filter(dataset=dataset_this_compute, context='dataset', **_skip_filter_checks).kinds
-            if 'lc' in ds_kinds_enabled or 'rv' in ds_kinds_enabled or 'lp' in ds_kinds_enabled:
+            if len(self._datasets_where(compute=compute, mesh_needed=True)) > 0:
                 logger.info("run_compute: computing necessary ld_coeffs, pblums, l3s")
                 self.compute_ld_coeffs(compute=compute, skip_checks=True, set_value=True, **{k:v for k,v in kwargs.items() if k in computeparams.qualifiers})
                 # NOTE that if pblum_method != 'phoebe', then system will be None
@@ -12090,20 +12126,22 @@ def _scale_fluxes_cfit(fluxes, scale_factor):
             logger.debug("applying scale_factor={} to {} parameter in mesh".format(scale_factor, mesh_param.qualifier))
             mesh_param.set_value(mesh_param.get_value()*scale_factor, ignore_readonly=True)

-        # handle flux scaling based on distance and l3
-        # NOTE: this must happen AFTER dataset scaling
-        distance = self.get_value(qualifier='distance', context='system', unit=u.m, **_skip_filter_checks)
-        for flux_param in ml_params.filter(qualifier='fluxes', kind='lc', **_skip_filter_checks).to_list():
-            dataset = flux_param.dataset
-            if dataset in datasets_dsscaled:
-                # then we already handle the scaling (including l3)
-                # above in dataset-scaling
-                continue
+        # alternate backends other than legacy already account for distance via pbflux
+        if computeparams.kind in ['phoebe', 'legacy']:
+            # handle flux scaling based on distance and l3
+            # NOTE: this must happen AFTER dataset scaling
+            distance = self.get_value(qualifier='distance', context='system', unit=u.m, **_skip_filter_checks)
+            for flux_param in ml_params.filter(qualifier='fluxes', kind='lc', **_skip_filter_checks).to_list():
+                dataset = flux_param.dataset
+                if dataset in datasets_dsscaled:
+                    # then we already handle the scaling (including l3)
+                    # above in dataset-scaling
+                    continue

-            fluxes = flux_param.get_value(unit=u.W/u.m**2)
-            fluxes = fluxes/distance**2 + l3s.get(dataset)
+                fluxes = flux_param.get_value(unit=u.W/u.m**2)
+                fluxes = fluxes/distance**2 + l3s.get(dataset)

-            flux_param.set_value(fluxes, ignore_readonly=True)
+                flux_param.set_value(fluxes, ignore_readonly=True)

         # handle vgamma and rv_offset
         vgamma = self.get_value(qualifier='vgamma', context='system', unit=u.km/u.s, **_skip_filter_checks)
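The scaling that is now gated on the backend kind is a simple inverse-square law plus third light. A toy illustration of the fluxes/distance² + l3 step (the `scale_fluxes` helper and the numbers are made up for this sketch; the real code operates on flux parameters in SI units):

```python
def scale_fluxes(fluxes, distance, l3=0.0):
    """Scale fluxes by inverse-square distance, then add third light.
    This mirrors the step now applied only for the 'phoebe' and 'legacy'
    backends (#832), since other alternate backends such as ellc and
    jktebop already fold distance into their passband flux."""
    return [f / distance**2 + l3 for f in fluxes]

print(scale_fluxes([4.0, 8.0], distance=2.0, l3=1.0))  # [2.0, 3.0]
```

Applying this step a second time on top of a backend's own distance handling is exactly the double scaling the fix removes.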

pyproject.toml

Lines changed: 1 addition & 1 deletion

@@ -11,7 +11,7 @@

 [project]
 name = "phoebe"
-version = "2.4.12"
+version = "2.4.13"
 description = "PHOEBE: modeling and analysis of eclipsing binary stars"
 readme = "README.md"
 requires-python = ">=3.7"

0 commit comments