TUFLOW Forum

peteraylett

  1. Hi Duck, I'll let Joe come back on your other questions, but specifically that error 1033 is relating to TUFLOW trying to interpolate the inverts of your channels. With channels, setting inverts to -99999 asks TUFLOW to interpolate the inverts at the end of your network lines using known invert elevations from further away in your network. I'm not sure why it's doing that for your bridge, since it should be just applying the lowest from your XZ table (or HW like you had before). For the one structure then, you could stop TUFLOW trying to interpolate anything by manually setting the inverts yourself instead of going with -99999. It may be that this only helps move TUFLOW on to a different step which might reveal some data issue elsewhere and explain the odd behaviour, so don't expect it to cure all. I might even be more worried if this fixes the thing, as it would mean bridges weren't being handled as expected! I've just twigged it's not only trying to interpolate inverts, but the whole set of properties, so what I wrote above doesn't apply. Suggests it's not recognising the cross section data being applied, though your command looks right. Sorry, not sure what to suggest! Hope this helps, Peter.
  2. You're quite right; so tidal profiles could be built up of an astronomical profile and a surge profile, for example. The place it isn't allowed is when dealing with HX boundaries (and by extension QT boundaries) and HQ boundaries, where adding up the water levels makes no sense. From the reading of those release notes, Phil, such tidal examples would no longer function though? They'd only apply the first HT they read in (a solution which I wholeheartedly approve of for HX and HQ, and which will make downstream ends of 1D/2D river models that bit easier). Could TUFLOW perhaps permit multiple HTs (or at least have the option to do so for legacy models), and only apply the "first one wins" approach if any of the other flavours of H boundary are present? Easy enough for the user to just change the input data though, I suppose. Also, as a thought, in just about every other walk of TUFLOW life, when conflicting inputs are applied in the same place it's the last that wins; is there a rationale for diverging from that approach here? Many thanks, Peter.
  3. ...so the rectification is to check your digitisation of your H boundaries at this location (as indicated by your spatial message(s)) and re-digitise at least one of them so they aren't selecting the same cell. 😉 If you think there's a reason why there should be more than one HT being applied at the same place, do come back and there can be some discussion on that! As a rule though, it's not allowed, as you're instructing the model to apply two (presumably conflicting) water levels at the same place, and the model can only ever have one water level in a cell. If they are always applying the same level and you've snapped them together to ensure continuity, consider making them into a single feature.
  4. Hi Josh, It sounds like it could be a mass balance issue in ESTRY. When you run on GPU, you're normally pushed into running single precision, and sometimes that's not quite good enough. This could be one of those times. You can test for this easily enough by simply running your model with the double precision engine and seeing if it comes out different! You'll need to add the command "GPU DP Check == OFF" to run this on GPU. Another possibility is that you need to run with a smaller ESTRY timestep (which would again manifest as mass balance problems). If you've got small elements then your timestep needs to be smaller. Often ESTRY timesteps seem to be a neglected consideration, set at 1 (the default) and ignored. You may very well need to go smaller. Finally though, are you sure it's not correct? A small trickle slowly draining off such an area would accumulate through a pipe network into something non-negligible. I'd start by looking at your mass balance outputs and seeing if there's anything untoward in there. Let us know how you get on; I'm sure other folk are likely to run into something similar and would like to know the outcome! PHA.
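A quick way to see why single precision can bite in exactly this situation: the sketch below (hypothetical numbers, nothing TUFLOW-specific) repeatedly adds a small inflow to a large stored volume and shows the inflow vanishing entirely in float32 while double precision conserves it.

```python
import numpy as np

# Illustration of single- vs double-precision accumulation (hypothetical
# numbers, not TUFLOW output): a "small trickle" added to a large stored
# volume can be lost completely in float32.
storage32 = np.float32(1_000_000.0)   # large stored volume, single precision
storage64 = np.float64(1_000_000.0)   # same volume, double precision
inflow = 1e-4                          # tiny per-step inflow

for _ in range(10_000):
    storage32 += np.float32(inflow)    # each addition rounds away to nothing
    storage64 += inflow                # accumulates correctly

print(storage32 - np.float32(1_000_000.0))  # -> 0.0: all 1.0 of inflow lost
print(storage64 - 1_000_000.0)               # -> ~1.0: conserved
```

The gap between the two totals is exactly the kind of mass error that a double precision re-run would reveal.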
  5. The big question is, what is it that would govern the transfer of flow from the one place to another? Passing flows from a 2D cell to a 1D node would normally be done using an HX or SX boundary, and which you'd use would depend upon what you've got going on. An SX boundary can draw water from (or pass water into) an area, if that's appropriate. Probably transferring water from one area of 2D to another area of 2D would also be done via a 1D network of some sort (making the starting assumption that it's not a 2D flow route between the two, or you wouldn't be asking!), but again what you'd actually use would depend on what was really going on. I'm not sure the above is the most helpful answer I've ever given though; if you wanted to come back with a bit more detail I could try for something more specific! PHA PS. It's nice to see you on the forum again, it's been a while!
  6. Old thread alert! However, it seems a good place for the following... I wonder if we could have quite the opposite of what I first asked for way back when, and have an option to make the copied model much larger! This time, by looking in the place where model results would be being written if the simulation were actually happening, and if there is anything there (with the right name) then copying that into the copy of the model. Just to make things easier to bundle up together for issue. It may be that this would fit much better with the -pm option rather than -c, but I'm still a little wary of -pm as it's done the occasional funny thing when dealing with a model with lots of scenarios (while -c will always perform flawlessly). Alternatively, perhaps a completely separate utility that just goes and gets the results (I don't always want another copy of the model, I just want the results collated) might be a good idea..? Thoughts very welcome! It may be that there's some neat and clever way of doing the above already? I normally just do it manually, but copying a few files from the main Results/ folder, then some from plot/, then csv/ and gis/, then the .flts from grids/ all gets a little tedious! So anything to speed up the process would be appreciated. Thanks! Peter.
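In the absence of a built-in utility, the manual collation described above could be scripted. This is only a sketch: the subfolder names come from the post (Results/, plot/, csv/, gis/, grids/), and `collate_results` is a hypothetical helper, not part of any TUFLOW tooling.

```python
import shutil
from pathlib import Path

# Subfolders to sweep, per the post's description of a typical results tree.
# Adjust to match your own output structure.
SUBFOLDERS = ["", "plot", "csv", "gis", "grids"]

def collate_results(results_root, run_name, dest):
    """Copy every file whose name contains run_name into dest (flat)."""
    results_root, dest = Path(results_root), Path(dest)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for sub in SUBFOLDERS:
        folder = results_root / sub
        if not folder.is_dir():
            continue
        for f in folder.glob(f"*{run_name}*"):
            if f.is_file():                      # skip any matching folders
                shutil.copy2(f, dest / f.name)   # preserves timestamps
                copied.append(f.name)
    return copied
```

Matching on a substring of the run name keeps it simple, at the cost of also sweeping up any similarly named runs; a stricter pattern would be easy to substitute.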
  7. ...but your max and min will be at the start and end of your simulation, as the model only permits the groundwater to fill up at present.
  8. Hi Monika, TUFLOW doesn't do any datum transformations, so everything that you specify needs to be relative to a consistent datum of your choosing. So yes, your boundaries and bathy all need to be using the same datum, and the outputs will then also be relative to that same datum. I hope that helps, Peter.
  9. Dear all, What I'd like to be able to do is run my model with a single named scenario and have the file names come out with only that scenario, but have that scenario reference a bunch of other scenarios which want running in combination. Here's what I mean: I have a model in which I've used scenarios to represent a whole pile of engineering options, scattered around the area; let's call these OptA, OptB, OptC, OptD, etc. Having tested them individually, some have been selected to try in combinations. Combination 1 would be OptA, OptC and OptG, say, while combination 2 would be OptA, OptB and OptW. I'd like to be able to reference simply "tuflow.exe -s comb1 mySim_~s~.tcf", where the .tcf contains a command that says something like:

     If Scenario == comb1
       Activate Scenario == OptA  ! Yes, I've made up this command for the purpose of the example
       Activate Scenario == OptC
       Activate Scenario == OptG
     End If
     If Scenario == OptA
       ! Do some stuff ...
     End If

etc., such that it then processes the rest of the .tcf as if OptA, OptC and OptG had been called as scenarios, BUT the results are all going to be called only mySim_comb1.tlf, for example. Is this currently possible? My understanding is that scenarios can be set in the .tcf but would be overwritten by the command flag -s (so couldn't be called by it!), and also would still turn up in the filenames. I don't think variables help..? I could just add "comb1" to any If Scenario == OptA statement, but it's messy and I might miss one somewhere; I'd rather just be able to tell it: when I ask for comb1, also do OptA. If it's not currently possible, do you think it could be implemented please? It'd keep file names tidy for easier bulk processing of complex projects and help keep .tcf If Scenario statements cleaner (they can get quite messy enough!). Thanks, Peter.
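In the meantime, a thin launcher script can at least expand a combination name into TUFLOW's existing multi-scenario command-line flags (-s1, -s2, ...). A sketch under assumptions: the combination names and option lists are the hypothetical examples from the post, and the .tcf filename is illustrative only. Note it does not solve the naming complaint, since each scenario still appears in the output filenames.

```python
# Sketch: expand a combination name into TUFLOW's -s1/-s2/... scenario flags.
# COMBINATIONS and the .tcf name are hypothetical examples from the post.
COMBINATIONS = {
    "comb1": ["OptA", "OptC", "OptG"],
    "comb2": ["OptA", "OptB", "OptW"],
}

def build_command(comb, tcf="mySim_~s1~_~s2~_~s3~.tcf", exe="tuflow.exe"):
    """Return the argument list for launching one combination run."""
    args = [exe]
    for i, scenario in enumerate(COMBINATIONS[comb], start=1):
        args += [f"-s{i}", scenario]   # one numbered flag per scenario
    args.append(tcf)
    return args

print(" ".join(build_command("comb1")))
# -> tuflow.exe -s1 OptA -s2 OptC -s3 OptG mySim_~s1~_~s2~_~s3~.tcf
```

From here, `subprocess.run(build_command("comb1"))` would launch the run; a post-processing rename step could then collapse the multi-scenario filenames to the combination name if tidiness matters.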
  10. Hi again, Ok, so you've fallen into one of TUFLOW's traps for the unwary/inexperienced. The specification required in the .csv is literally how far up the pipe you are against how wide it is; you don't tell TUFLOW where both sides are, just how wide it is for a bunch of heights. See the figure attached; there's a blue arrow (height) there for every red arrow (width), and that's all TUFLOW wants. Critically, this also needs to be in ascending order (which is what the error you referenced is checking for). I've taken your spreadsheet and done a quick conversion of it to the right format, albeit with a bit less detail than you started with where your elevations didn't pair up. Probably close enough for modelling! See the other attachment. I hope that makes sense. Any questions, just come back again! All the best, Peter. PS. For reference, your data also has the 'how far across' data in the 'H' column, and the 'how far up' data in the 'W' column, which is backwards. Arch_culvert_2.xls
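The conversion described above (both sides of the section at each height reduced to a single width per height, heights ascending) can be sketched as follows. The (height, left, right) input format is a hypothetical stand-in for the spreadsheet columns; only the output shape matches what the post describes.

```python
# Sketch: convert "both sides" section data into (height, width) pairs,
# heights strictly ascending, which is the ordering that TUFLOW error checks.
# Input format (height, left_chainage, right_chainage) is hypothetical.
def to_hw_table(points):
    """points: iterable of (height, left_chainage, right_chainage)."""
    table = sorted((h, right - left) for h, left, right in points)
    heights = [h for h, _ in table]
    if any(b <= a for a, b in zip(heights, heights[1:])):
        raise ValueError("heights must be strictly ascending")
    return table

hw = to_hw_table([(1.0, 0.5, 2.5), (0.0, 1.5, 1.5), (0.5, 1.0, 2.25)])
print(hw)  # -> [(0.0, 0.0), (0.5, 1.25), (1.0, 2.0)]
```

Each output row is one blue arrow (height) paired with one red arrow (width) from the figure; duplicate heights are rejected rather than silently merged.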
  11. Hi there! Do you still have your original .csv describing the shape with the curves at the base? I don't see why it wouldn't work, so I'd be curious to see what you were working with and whether I can figure it out. Peter.
  12. I'll say the boundary attributes look fine. I don't suppose your model is up in some hills/mountains somewhere? And have you perhaps trimmed your input DTM to just your modelled area? (This is relatively common practice, for all it has potential to cause problems for people!) If 'yes' to both of those, it may simply be that the row of cells which your boundary is selecting was not covered by your DTM, and has been set to some arbitrary elevation. The slope boundary is causing the water level to be up somewhere "sensible" in relation to the water that's sitting on the DTM, and so you end up with your insensible depths. In which case the solution would be to either get your DTM data again so it covers that row of cells, or move your boundary (and code polygon) in by a row of cells to where you currently have good data. You could check whether this was your problem by looking in your zpt_check file and examining the elevations in that area.

Alternatively, given this post is a month old at this point, you may have already solved the problem! In which case I'd be interested to hear what it was, just so we can all learn. Good luck, Peter.

PS. As a general comment (which may not even relate to this case if the problem was not as I described), I'd recommend either not trimming DTM data to a model domain, or if you really want to then trimming it with a generous buffer. Otherwise you end up with difficulties like this at boundaries on the perimeter of the model, or if you find yourself needing to tweak your model extent (perhaps some glasswalling turns up in your extreme flood events) then you'll have to re-generate your DTM. All of which could be avoided by just not going to the trouble of trimming the data in the first place.
  13. Sounds to me like there'd be benefit in the developers introducing a "2d_lsrfsh" (a "layered storage reduction factor shape") to build upon the functionality of a 2d_srf, in a similar way to how a 2d_lfcsh builds upon the 2d_fc! (please devs!) In the meantime, I'm not aware of a mechanism to do what you desire in 2D. If the detail of the flow route under the building isn't of too great an importance (simply that flow is permitted to pass through there and be stored there), then you could adopt a 1D-2D approach. You could adjust the 2D so that elevations are up at roof level for the building footprint, then create a 1D node with the appropriate stage/area relationship for under the building (check out 5.11, 5.11.1 and 5.11.4 in the 2017 manual; note that you can't set the area to be zero, so it's not a perfect lid, but you can make it suitably small so that the volume in your building is negligible) and link the node to the 2D around the building perimeter. There doesn't seem to be any documentation in the manual about the construction of the NA table beyond it being comma or space separated (please devs!), but I'd guess it's stage in column 1 and area in column 2 (and if that doesn't work, try the other way round). The smaller you make that area above floor height, the more twitchy the model will get as flow enters or leaves the node, so be prepared to reduce your 1D timestep accordingly so the volume change over a single timestep won't be too large and trigger an overly large change in level... It's not a pretty solution, but I think it would do the job! If you do also need the actual flow paths and velocities under the building, then it's not going to work for you. Option 1 (press for the very rapid development of layered storage reduction and apply that in combination with the 2d_lfcsh you already have) would definitely be better (please devs!).
I'd be really interested to hear if folk have other ideas for how to approach this situation.

While I'm here though, I'll add that the setup you currently have should be getting both the velocities and the afflux of the structure pretty much right; that's kind of what the 2d_lfcsh is designed for, and the presence of the honeycomb simply enables the pressure to be correct across the system. It's only really the storage that's going to be thrown by this. Which brings an ugly hack to mind. Suppose your void under the building was 2m deep, and you expected your peak water level around the building to reach about 0.5m above the floor level (as determined by your current model). You could apply a 20% storage reduction factor to the cells under the building (that is, the water is trying to be 2.5m deep, but you only want storage for the 2m void, so remove the other 0.5m worth). This would mean that when peak flood levels were reached you would be representing the appropriate storage within the building! Your 2d_lfcsh would still be appropriate as you have it. The drawbacks are that:
  • you would need a different SRF for each event you're considering;
  • if the storage does have an impact on the modelled water level this could be iterative (though if you were prepared to be conservative you could just apply a slightly larger than necessary SRF);
  • this would have some impact on the results when the water hadn't yet reached the underside of the property. Helpfully, you already have some results without this influence under the building, so you could compare and see if this was having an undesirable impact.

And with that, I'll say good luck!
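The SRF arithmetic in that hack can be written down in one line. A sketch using the post's numbers (2m void, peak 0.5m above floor level), assuming the SRF is interpreted as the fraction of cell storage removed:

```python
# Storage-reduction arithmetic from the post: water "wants" to be
# void + above_floor deep, but only the void should store water, so remove
# the remainder as a fraction of the total depth.
def srf_for_void(void_depth, depth_above_floor):
    total_depth = void_depth + depth_above_floor
    return depth_above_floor / total_depth  # fraction of storage to remove

print(srf_for_void(2.0, 0.5))  # -> 0.2, the 20% SRF quoted above
```

Because the formula depends on the peak depth above floor level, a different event level gives a different SRF, which is exactly the first drawback listed above.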
  14. While it would be nice to say that this is like water levels being displayed as higher than soffit level in surcharged pipe networks (in which case the displayed water level is essentially the pressure at that location, the static head, though the volume of water present in the pipe is limited to only the space available), I'm pretty confident this is not the case. My understanding is that the lfcsh only applies to the cell sides, and hence only impacts on the movement of water. But the cell itself, as a storage bucket, remains unchanged and retains its uniform plan area all the way from ground level to the sky, right through your building in-between. As long as the additional storage in your model isn't an issue, then this can still be interpreted as the pressure under the building (make sure your building isn't going to float away!) and all will be well. If the storage is an issue then I'm not sure what to recommend, but you'll need to represent things slightly differently. I hope that makes sense, but feel free to come back with further questions!
  15. Hi there, It's hard to say for certain without seeing more what you're looking at, but it may simply be that you're not running more than 10 simulations at once? A single simulation (in classic at least, things will be different in HPC) can only harness a single core worth of CPU. I would expect that it would be close to fully utilizing one core worth though, so perhaps when you're running 4 concurrent simulations it shows as using about 20% of your CPU? Sorry if this is all way too low level an answer and you're actually running 20 sims at once! If that's where you're at and it's using less than 50% of the CPU then you've a bottleneck in your hardware somewhere (possibly RAM speed..?); but I'll leave it to another more knowledgeable person to discuss that! Hope that's of some use, PHA.
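The back-of-envelope utilisation check suggested above can be made explicit. A sketch assuming one core per Classic simulation, as the post describes; the 20-core count is an illustrative assumption, not from the original question:

```python
# Rough expected CPU utilisation for TUFLOW Classic runs: one core per
# simulation (per the post). n_logical_cores=20 is an assumed machine size.
def expected_utilisation(n_sims, n_logical_cores=20):
    return min(n_sims, n_logical_cores) / n_logical_cores

print(expected_utilisation(4))  # -> 0.2, i.e. ~20% with 4 concurrent sims
```

If measured utilisation sits well below this estimate with many concurrent runs, that points at the hardware bottleneck (possibly RAM bandwidth) mentioned above.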