TUFLOW Forum

RhysHJ

Members
  • Content Count: 38
  • Community Reputation: 0 Neutral
  • Rank: Advanced Member
  • Recent Profile Visitors: 1281 profile views
  1. RhysHJ

    2d_lp in HPC

    Hello, I'm trying to use 2d_lp output for a small HPC test model. The CSVs are being written but I'm just getting a whole lotta zeros (sample below). The cells are definitely wet with significant depth and velocity. Is this feature not supported for the HPC solver? Thanks Rhys
  2. Paul, Felix Taaffe from our office investigated this issue in detail for his undergraduate thesis, including the effects of varying grid resolution. The attached paper was presented by Mark Babister on his behalf at IAHR in Brisbane last year. As you identify, the DEM cannot represent the small-scale drainage features in the upper catchment, leading to artificial extra storage being created. Rather than applying negative loss values, I think methods that modify the DEM, either by smoothing, filling or pre-wetting, are probably the way to go. If filling or pre-wetting, you might need to consider whether there are legitimate trapped depression storages in your catchment, and whether it is an appropriate antecedent condition to have them brim-full before your design storm burst. Cheers Rhys IAHR2093.PDF
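     For anyone wanting to experiment with the filling approach, below is a minimal priority-flood sketch in Python. This is an illustrative implementation of a common depression-filling algorithm, not the method from the attached paper; it raises every internal pit to its spill level, working inwards from the DEM edges.

        import heapq
        import numpy as np

        def fill_depressions(dem):
            # Priority-flood: process cells from the outside in, lowest first,
            # raising any cell below the level of its already-processed
            # neighbour so that no internal depression remains.
            nrows, ncols = dem.shape
            filled = dem.astype(float)
            visited = np.zeros(dem.shape, dtype=bool)
            heap = []
            for r in range(nrows):              # seed the queue with edge cells
                for c in range(ncols):
                    if r in (0, nrows - 1) or c in (0, ncols - 1):
                        heapq.heappush(heap, (filled[r, c], r, c))
                        visited[r, c] = True
            while heap:
                z, r, c = heapq.heappop(heap)
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < nrows and 0 <= nc < ncols and not visited[nr, nc]:
                        visited[nr, nc] = True
                        filled[nr, nc] = max(filled[nr, nc], z)  # lift pits to spill level
                        heapq.heappush(heap, (filled[nr, nc], nr, nc))
            return filled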
  3. Jon, With those computer specifications, I suspect there would be very little improvement from investing in superior hardware (unless you have access to a Cray). If your model is extremely large, some more RAM might help, but not many models would exceed a few GB. The GPU will not affect run times at all. The key parameters of your model run time are generally the number of wet grid cells and the number of timesteps, and the most effective way to reduce your run time is to look at these. Typically, if you double your grid cell size (and double your timestep) you can expect around a factor of 8 (2^3) reduction in run time, as sketched below. If you can't change your cell size, you might look at the length of your simulation - e.g. can you use a restart file or "hot start", or can you stop the simulation just after the peak of your event? You might also look for locations of potential instability where the schematisation can be improved, allowing you to run the model at larger timesteps. Another thing to check is how often you are writing 2D results to disk. If this is very frequent, it can slow the simulation considerably. Cheers Rhys
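     To make the 2^3 arithmetic concrete, here is the rough scaling logic as a few lines of Python (the numbers are purely illustrative):

        # Doubling cell size roughly quarters the wet cell count on a 2D grid
        # and allows roughly double the timestep, so:
        cell_size_factor = 2.0                   # e.g. 5 m grid -> 10 m grid
        fewer_cells = cell_size_factor ** 2      # ~4x fewer wet cells
        bigger_timestep = cell_size_factor       # ~2x fewer timesteps
        speedup = fewer_cells * bigger_timestep  # ~8x (2**3) faster
        print(f"Expected speedup: ~{speedup:.0f}x")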
  4. Giuliano, The direct rainfall method has a lot of hurdles, including the instabilities in areas of shallow depth/steep terrain that you are encountering. Other problems can occur later with poor delineation of drainage features like gutters (even with good quality LIDAR), causing problems with drainage connectivity and poor modelling of the concentration phase of runoff to each pit. I would suggest that your original approach of estimating hydrologic flows to each inlet pit separately, and applying them directly at the inlet, would be a more robust method. The additional work required is often similar to the work needed to ensure your direct rainfall model is "healthy". I'm not familiar with Windes, but if there is some way to export your catchment data/flows and get it into a spreadsheet, there isn't much more work required. That's just my opinion of course. Rhys
  5. Sorry, just read your post properly and realised the main issue you are having is small velocities in trivial-depth areas of the map from a rainfall-on-grid approach. You might want to experiment with the "Map Cutoff Depth" and "Maximum Velocity Cutoff Depth" commands in the .tcf file (see the sketch below). Obviously these would have to be included before the simulation is run - they are not a post-processing step.
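     As a minimal sketch of what that might look like (the cutoff values and the file name below are assumptions you would tune for your own model, not recommended defaults):

        # Illustrative only: append map cutoff commands to a control file.
        # "model.tcf" is a placeholder name; 0.05 m / 0.10 m are assumed values.
        cutoff_commands = [
            "Map Cutoff Depth == 0.05",
            "Maximum Velocity Cutoff Depth == 0.10",
        ]
        with open("model.tcf", "a") as tcf:
            tcf.write("\n".join(cutoff_commands) + "\n")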
  6. Mathieu, This always seems to take a bit of experimentation with each project to get something that looks right, as the choice of scale factor for a good-looking map generally depends on velocity magnitude and grid cell size. I often find it useful to use the -grid option to space the vectors a bit further apart, as with the default you get vectors for every computation cell and they get a bit crowded. Try using -grid with 2 or 3 times your cell size. This will allow use of a bigger scale factor so that the main flowpaths show up more. Another tip is to remove fill shading from the arrow polygons in your GIS environment (just keep the borders). This can remove clutter and allow the colours of your depth map/aerial photo to show through. Cheers Rhys
  7. Paul, I am a big fan of the embedded design storm approach for this kind of problem. Essentially, you embed your design peak burst for the critical duration (4.5 hrs) in a longer design storm (say 12 hours). The highest-intensity 4.5 hour period of the 12 hour storm is replaced by the design 4.5 hour storm, then the volumes at the start and end of the longer storm are reduced to bring the total volume back to the design ARI level (a rough sketch of this is below). This approach introduces antecedent rainfall that can be more representative of real storms than a standard burst approach. While not as rigorous as a Monte Carlo or continuous simulation, I think it is a good extension of design burst hydrology that can provide insight into the kind of problem you are investigating. Unfortunately I can't find the relevant papers to attach to this post. However, Ted Rigby was one of the people who published on the topic, and the approach has been coded into the WBNM model. Just be careful that some of the ARR burst patterns are quite front-loaded (maybe the 9-hr, from memory), so if such a pattern is used as the longer part of the storm, you may not get much antecedent rainfall, as your 4.5 hr burst will be inserted right at the beginning of the longer storm. Hope that helps Rhys
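     In case it helps, here is a rough Python sketch of the embedding idea. The function name, inputs and the flank-scaling rule are my assumptions for illustration; this is not the WBNM implementation.

        import numpy as np

        def embed_burst(long_storm, burst, target_depth):
            # Replace the highest-intensity window of the long storm with the
            # design burst, then scale the flanks so the total depth returns
            # to the design level. Inputs are rainfall depths per timestep.
            n = len(burst)
            window_sums = np.convolve(long_storm, np.ones(n), mode="valid")
            i = int(np.argmax(window_sums))       # start of the peak window
            storm = np.asarray(long_storm, dtype=float).copy()
            storm[i:i + n] = burst
            flank = np.ones(storm.size, dtype=bool)
            flank[i:i + n] = False                # everything outside the burst
            residual = target_depth - np.sum(burst)
            flank_total = storm[flank].sum()
            if flank_total > 0 and residual >= 0:
                storm[flank] *= residual / flank_total
            return storm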
  8. Sorry, my bad. Failed to read the instructions at the end of the post.
  9. Hi Phillip, Thanks, I've been looking forward to this for a while. However, trying to run the program, I get an error message: "This application has failed to start because xmdf1.8dll.dll was not found. Re-installing the application may fix this problem". If I run from the command line there is no error message, but the program does not return anything. Is there a dependency file that needs to be placed with the .exe? Cheers Rhys
  10. Tom, Is the tributary an important part of the study area? That is, do you actually need hydraulic modelling results for the tributary? It sounds like the amount of flow is relatively trivial compared to your mainstream discharge. Arguably, given you have not delineated the channel and it is not well defined by your survey, results in this area would be unreliable anyway. Perhaps you could think about removing the tributary from your hydraulic model domain, calculating the tributary flow hydrograph at the main channel using a runoff routing model, and applying this hydrograph as an inflow directly to the main channel at the confluence. If you really are interested in flood levels around the tributary, a separate, more detailed model might be in order. Cheers Rhys
  11. RhysHJ

    1d Junctions

    Rush, You need an "X" channel to connect the tributary. This is just a dummy channel in your 1d_nwk layer that connects the tributary to the main channel (the direction of digitisation is important - see the manual for details), and has an "X" in the channel type attribute. This has always been the recommended setup, but I think you could get away with not doing it until the most recent version. A rough sketch of building such a connector is below. Cheers Rhys
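    If it's useful, here is a hedged sketch of creating such a connector programmatically. The field name "Type", the coordinates and the CRS are illustrative assumptions, not the official 1d_nwk schema - check the empty 1d_nwk template for the actual attributes, and remember the digitisation direction matters.

        # Illustrative sketch using geopandas/shapely (assumed schema).
        # Digitise from the tributary towards the snap point on the main channel.
        import geopandas as gpd
        from shapely.geometry import LineString

        connector = gpd.GeoDataFrame(
            {"Type": ["X"]},                                  # "X" = dummy junction channel
            geometry=[LineString([(300100.0, 6250200.0),      # tributary end (placeholder)
                                  (300150.0, 6250180.0)])],   # main channel (placeholder)
            crs="EPSG:28356",                                 # placeholder projection
        )
        connector.to_file("1d_nwk_connector.shp")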
  12. Rusty's reply is a good summary of the main situations where double precision might become important (such as at high elevations, or when small increments of depth occur each timestep or across cells) and why; a quick numerical illustration is below. As to why your model runs so much slower with double precision, it might be RAM-related. Double precision models have much higher memory requirements, and if you don't have sufficient physical RAM your system will do a lot more paging to disk, which will significantly slow things down. Running a 32-bit system, or multiple simulations at once on a multi-core PC, might also have implications in this regard. As to which version to use, your approach is fine. If you verify there is no significant change in results with the double precision version, then stick to single precision for that model. Cheers Rhys
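     A quick way to see the single-precision issue for yourself (numbers are illustrative only):

        import numpy as np

        # A small depth increment applied to a high elevation is lost in
        # single precision but retained in double precision.
        elev32 = np.float32(1000.0) + np.float32(1e-5)
        elev64 = np.float64(1000.0) + np.float64(1e-5)
        print(elev32 == np.float32(1000.0))  # True  -> increment lost
        print(elev64 == np.float64(1000.0))  # False -> increment kept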
  13. I would also suggest that a reasonable representation of the bathymetry is likely to be of critical importance, particularly if you are using tidal forcing for a location 5 km up the estuary. At the very least, the collection of cross-section survey would be advisable to give you an idea of the bathymetry at key locations, and intermediate bathymetry can be guesstimated from there. As with any model, locating calibration data relevant to the purpose for which the model is being established should be a high priority. Grid size and schematisation will depend on the purpose of your model and practical considerations like model run time. Kind Regards Rhys
  14. This post may also help: http://www.tuflow.com/forum/index.php?showtopic=639 Cheers Rhys
  15. You probably need the runtime libraries from Microsoft's Visual C++ 2008 Redistributable Package, which are required for the more recent TUFLOW builds compiled with the Intel compiler. See the post at this link for details: http://www.tuflow.com/forum/index.php?showtopic=639