TUFLOW Forum


Showing results for tags 'hpc'.



Found 9 results

  1. I am running a Flood Modeller-TUFLOW model using FloMo v4.5. The Flood Modeller part runs just as if I were using the CPU, but there are no wet cells in TUFLOW at all, whereas there were plenty when running with the CPU solver. I checked the supplementary results and there appears to be no flow from Flood Modeller into the TUFLOW domain. A TUFLOW log file is output and TUFLOW is running, yet it reports zero wet cells. Do I need to do anything in addition to calling the HPC solver to get Flood Modeller to talk to TUFLOW?
  2. I am currently testing existing regional flood models using the HPC solver. Most of these models have source inflows applied using traditional SA inflows. While switching to the HPC solver in the control files is easy, overcoming the limit on source inflows applied to individual cells (4 source inflows in total) becomes problematic, especially as the messages layer in the log files does not show specifically where too many inflows have been applied. Can anyone suggest a quick workaround?
  3. Hi, we are running some TUFLOW models using GPU hardware and the HPC solution scheme (Build 2017-09-AC). We found that some of the runs fail without warning (well into the simulation). They seem to be 'exiting without prompt', so we are unable to view the DOS window to see what has caused the models to fail. We have tried disabling 'quick edit mode' in the DOS window in case the issue was related to the 'TUFLOW pause mid simulation? cause and solution' topic posted by Chris Huxley, but this didn't make any difference. Three runs (15, 25 and 540 minutes) completed without any apparent issues. Two runs (90 and 120 minutes) failed, but when re-started they completed successfully. However, three longer runs (720, 2400 and 2880 minutes) failed and will not complete on re-start. We have reviewed the .tlf files (both the standard .tlf and the hpc.tlf). We appreciate that the adaptive timestep will mask potential instabilities, but there is nothing in the .tlf files to indicate that TUFLOW is having instability issues (i.e. the timesteps are consistent at the time of failure). We would appreciate some guidance on how to resolve this issue. We are managing the runs via TRIM. Kind regards, Francis Lane
  4. laomashu

     HPC run

     Hi, I have developed a rain-on-grid model and run it on a PC with a GeForce GTX 780 Ti (driver version 372.90); the simulation crashed due to instability. However, the run was fine on another PC with a GeForce GTX Titan Black (driver version 372.90). I am just wondering whether the graphics card has an impact on the simulation. Regards, Hai Chen
  5. Hi Admin, do you have an example set of model text files incorporating HPC in the .tcf, a 1D domain (.ecf), IL via the materials layer, and direct rainfall? Thanks in advance.
  6. Question: When running a direct rainfall model using the TUFLOW HPC solver and applying an IL/CL using the materials layer, the results in the RFML map output show that the losses continue to accumulate after the rainfall has stopped, i.e. for a 1 hour storm event, the cumulative loss continues to increase after the model time has passed the 1 hour mark. Is the model working, and am I able to use the other output results? Answer: Yes, the model is working and you can still use TUFLOW HPC to model direct rainfall with IL/CL applied using the materials layer. There is a bug in TUFLOW build 2017-09-AC in the RFML map output when running with the HPC solver that continues to report rainfall loss after the rainfall has ceased. This bug is confined to this map output only and does not affect the hydraulic computations or other outputs (depth, velocity, hazard, etc.). Please do not use the RFML output when using the HPC solver in TUFLOW build 2017-09-AC or earlier. This is not an issue when using the Classic solver.
  7. I am currently testing existing regional flood models using the HPC solver. Most of these models have source inflows applied using traditional SA inflows. While switching to the HPC solver in the control files is easy, overcoming the limit on source inflows applied to individual cells (4 source inflows in total) becomes problematic, especially as the messages layer in the log files does not show specifically where too many inflows have been applied. Can anyone suggest a quick workaround?
  8. Hi! I'm having some trouble getting it to rain in a model I've converted to HPC. It worked fine in Classic, but when I add the lines below to the TCF, no water enters the model: Solution Scheme == HPC and Hardware == GPU. The TLF shows that it's still reading the hyetographs correctly, but for some reason the Vi column is full of zeroes. Any ideas? I thought maybe it didn't like the Global Rainfall BC command, so I tried it with a 2D_RF polygon, but no luck. Similarly, I tried changing the PO layer to one without QS lines, but still no luck. Not sure if it's related, but the DOS window also contains a bunch of errors along the lines of "CUDA driver API error 0400"; they occur all the way through, but the run doesn't stop. Please help! Sam
  9. Hi! I'm having some trouble getting it to rain in a model I've converted to HPC. It worked fine in Classic, but when I add the lines below to the TCF, no water enters the model: Solution Scheme == HPC and Hardware == GPU. The TLF shows that it's still reading the hyetographs correctly, but for some reason the Vi column is full of zeroes. Any ideas? I thought maybe it didn't like the Global Rainfall BC command, so I tried it with a 2D_RF polygon, but no luck. Similarly, I tried changing the PO layer to one without QS lines, but still no luck. Not sure if it's related, but the DOS window also contains a bunch of errors along the lines of "CUDA driver API error 0400"; they occur all the way through, but the run doesn't stop. Please help! Sam Attached: Gin_09_05PC_2160m_post_HPC.tlf, Gin_09_05PC_2160m_post_HPC.hpc.tlf
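Item 5 asks for an example set of model text files combining these features. Below is a minimal .tcf sketch assembled only from commands that appear in the posts above plus standard TUFLOW control-file commands; every path, filename and time value is a hypothetical placeholder, the referenced .tgc, .tbc, .ecf and materials files are not shown, and each command should be checked against the TUFLOW manual for your build:

```
! Sketch of a .tcf for the combination asked about in item 5:
! HPC solver on GPU, a 1D (.ecf) domain, IL/CL via the materials file,
! and direct rainfall. All paths and values are placeholders.
Solution Scheme == HPC                          ! HPC solver (as in items 8 and 9)
Hardware == GPU                                 ! run the HPC solver on the GPU

Geometry Control File == ..\model\example.tgc   ! placeholder path
BC Control File == ..\model\example.tbc         ! placeholder path
ESTRY Control File == ..\model\example.ecf      ! 1D domain (.ecf)

Read Materials File == ..\model\materials.csv   ! materials with IL/CL loss values

! Direct rainfall is applied as a boundary, e.g. via a 2D_RF polygon
! referenced from the .tbc, or via the Global Rainfall BC command
! discussed in items 8 and 9.

Start Time == 0                                 ! hours (placeholder)
End Time == 3                                   ! hours (placeholder)
```

Note that, per item 6, if IL/CL losses are applied through the materials layer with the HPC solver in build 2017-09-AC or earlier, the RFML map output should not be relied upon, although the hydraulic results remain valid.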