
Python int too large to convert to C long #5454

Open
jpgallegoar opened this issue Dec 18, 2024 · 0 comments

Comments


jpgallegoar commented Dec 18, 2024

Describe the bug

I got this error after choosing too large a resolution in the Hunyuan Video model. Is there a way to fix it?

!!! Exception during processing !!! Python int too large to convert to C long
Traceback (most recent call last):
  File "E:\AI\ComfyUI\execution.py", line 328, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "E:\AI\ComfyUI\execution.py", line 203, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "E:\AI\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "E:\AI\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "E:\AI\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\nodes.py", line 1140, in process
    out_latents = model["pipe"](
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "E:\AI\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\diffusion\pipelines\pipeline_hunyuan_video.py", line 576, in __call__
    noise_pred = self.transformer(  # For an input image (129, 192, 336) (1, 256, 256)
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\AI\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\modules\models.py", line 775, in forward
    x = block(*single_block_args)
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\_dynamo\eval_frame.py", line 465, in _fn
    return fn(*args, **kwargs)
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\AI\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\modules\models.py", line 401, in forward
    attn = attention(
  File "E:\AI\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\modules\models.py", line 401, in torch_dynamo_resume_in_forward_at_401
    attn = attention(
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\_dynamo\eval_frame.py", line 632, in _fn
    return fn(*args, **kwargs)
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\_functorch\aot_autograd.py", line 1100, in forward
    return compiled_fn(full_args)
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\_functorch\_aot_autograd\runtime_wrappers.py", line 321, in runtime_wrapper
    all_outs = call_func_at_runtime_with_args(
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\_functorch\_aot_autograd\utils.py", line 124, in call_func_at_runtime_with_args
    out = normalize_as_list(f(args))
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\_functorch\_aot_autograd\runtime_wrappers.py", line 667, in inner_fn
    outs = compiled_fn(args)
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\_functorch\_aot_autograd\runtime_wrappers.py", line 488, in wrapper
    return compiled_fn(runtime_args)
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\_inductor\codecache.py", line 1478, in __call__
    return self.current_callable(inputs)
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\_inductor\utils.py", line 1977, in run
    return model(new_inputs)
  File "C:\Users\thega\AppData\Local\Temp\torchinductor_thega\m4\cm4tanyectk3tyxslemljxlmhrsbzdffq4s23xkysepq7c65xipp.py", line 196, in call
    triton_poi_fused__scaled_mm__to_copy_ones_0.run(arg0_1, arg1_1, buf0, 2259164160, grid=grid(2259164160), stream=stream0)
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\_inductor\runtime\triton_heuristics.py", line 836, in run
    self.autotune_to_one_config(*args, grid=grid, **kwargs)
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\_inductor\runtime\triton_heuristics.py", line 729, in autotune_to_one_config
    timings = self.benchmark_all_configs(*args, **kwargs)
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\_inductor\runtime\triton_heuristics.py", line 704, in benchmark_all_configs
    timings = {
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\_inductor\runtime\triton_heuristics.py", line 705, in <dictcomp>
    launcher: self.bench(launcher, *args, **kwargs)
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\_inductor\runtime\triton_heuristics.py", line 675, in bench
    return benchmarker.benchmark_gpu(kernel_call, rep=40, fast_flush=True)
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\_inductor\runtime\benchmarking.py", line 66, in wrapper
    return fn(self, *args, **kwargs)
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\_inductor\runtime\benchmarking.py", line 201, in benchmark_gpu
    return self.triton_do_bench(_callable, **kwargs, return_mode="median")
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\triton\testing.py", line 106, in do_bench
    fn()
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\torch\_inductor\runtime\triton_heuristics.py", line 663, in kernel_call
    launcher(
  File "<string>", line 13, in launcher
  File "C:\Users\thega\miniconda3\envs\cogsage\lib\site-packages\triton\backends\nvidia\driver.py", line 408, in __call__
    self.launch(*args, **kwargs)
OverflowError: Python int too large to convert to C long
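The failing frame is the Triton kernel launch above, which passes `grid=grid(2259164160)`. That element count is larger than 2**31 - 1, and on Windows (LLP64 data model) a C `long` is only 32 bits wide, so marshalling the launch argument overflows. A minimal sketch of the arithmetic (variable names are illustrative, not from the Triton source):

```python
# The generated Inductor kernel is launched with this many elements
# (value taken verbatim from the traceback above):
numel = 2_259_164_160

# On Windows, a C long is 32 bits regardless of pointer size,
# so the largest value it can hold is 2**31 - 1.
WINDOWS_C_LONG_MAX = 2**31 - 1  # 2147483647

# The launch argument does not fit, which is exactly what
# "OverflowError: Python int too large to convert to C long" reports.
print(numel > WINDOWS_C_LONG_MAX)  # → True
```

If this is the cause, lowering the resolution or frame count so the tensor element count stays below 2**31 should work around the error.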

Environment details

triton-windows 3.0.0, RTX 4090
