Looking for huge (mpd) models
#1
I'm looking for some huge (yet serious / real-world) LDraw models. I'm talking 10,000+ pieces in MPD format.

Does anyone know of good websites offering LDraw files of such MOCs? Or maybe you have some of your own.
Re: Looking for huge (mpd) models
#2
The copy of Datsville that I have has 37,655 parts (according to LDView's HTML parts list). I'm not exactly sure where (if anywhere) this lives on the Internet, though.
Re: Looking for huge (mpd) models
#3
Datsville might be a bit too large :-)

I have "town_full_cleaned.mpd" from Nov 2009. I don't know the exact brick count, but it seems incomplete. (I got it from the SVN a year or so ago.)

I don't think it's 37,000 parts though (it's doing 10 fps in LDCad 1.1b, while a 5000+ part model does 30 fps). Where did you get your version?
Re: Looking for huge (mpd) models
#4
I don't remember where I got my version, but it's not the boxified version. You might have the boxified version (where bricks with bricks above and below them have been converted into boxes). On my work computer, LDView displays it at 4.6 FPS with default settings (7.5 FPS with stud textures turned off). For comparison, LDView on that same machine displays Orion's 10030 model (3020 parts) at 15 FPS (18.7 FPS with stud textures turned off). So despite the high part count of Datsville, the part complexity is relatively low.
Re: Looking for huge (mpd) models
#5
It's not the boxified version (no .box references in the MPD); it just seems my rendering code scales weirdly (in a good way).

Anyway, the whole reason I'm searching for more large models is to compile a nice test set for benchmarking when I start tinkering (after the flexible part stuff) with a full non-fixed-pipeline rendering implementation for LDCad (bigger models make overall performance differences easier to see).

So any other (complex) big MOCs are very welcome.
Re: Looking for huge (mpd) models
#6
I built a file for you. LDView crashed on compiling it; MLCad works.
I estimate that approx. 11,000 pieces are used.


Attached Files
.mpd   bigfile.mpd (Size: 1.48 MB / Downloads: 3)
Re: Looking for huge (mpd) models
#7
Thanks, but I was kind of hoping for single models instead of combined scenes.

Although this does seem to be a good test scene; it does ~32 fps when zoomed to fit. I now also realize why my renderer scales weirdly: in big scenes, edges are not drawn unless you zoom in, e.g. on the three white ships (fps drops to 13 in that case). Still not bad for 11,000 pieces :-)

I'm really curious whether a 100% custom shader approach (non-fixed pipeline) would perform better on modern VGA cards.

PS: was this file combined with the new MPD Center? Is that where the ~*~ bits come from? Because * isn't allowed on most filesystems.
Re: Looking for huge (mpd) models
#8
Roland Melkert Wrote:PS: was this file combined with the new MPD Center? Is that where the ~*~ bits come from? Because * isn't allowed on most filesystems.
Yes; at first I had && as the delimiter, but that gave trouble on web pages. Any good suggestion for the delimiter is very welcome.
Re: Looking for huge (mpd) models
#11
Why not the good, old-fashioned, system-neutral, fairly standard ___ (however many underscores you desire)?

Tim
Re: Looking for huge (mpd) models
#12
It needs to be a combination that is almost certainly not used in any other filename. Therefore the underscore is not my choice.
Re: Looking for huge (mpd) models
#13
In that case, _-_ or -_-. Neither is likely to be used. Or __-__ to be really safe.

Any combination of _ and - (and ~ for that matter) has the advantage of being both POSIX- and DOS-safe, which covers just about any OS.

Tim
Re: Looking for huge (mpd) models
#14
I just looked into my code. If nothing else is specified in the configuration file (only changeable by editing the file), then three underscores are used :-)
As I did not get any negative feedback, this seems to be a good solution.
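For what it's worth, once the delimiter is fixed, splitting such a combined name back into the original model names is trivial. A minimal sketch in C++ (the function name and code are mine for illustration, not the actual MPD Center implementation):

```cpp
#include <string>
#include <vector>

// Split a combined name such as "house___tree___car" back into its
// component model names. The "___" delimiter matches the default
// described above; everything else here is an illustrative assumption.
std::vector<std::string> splitCombinedName(const std::string &name,
                                           const std::string &delim = "___")
{
    std::vector<std::string> parts;
    std::size_t start = 0, pos;
    while ((pos = name.find(delim, start)) != std::string::npos) {
        parts.push_back(name.substr(start, pos - start));
        start = pos + delim.size();
    }
    parts.push_back(name.substr(start)); // last (or only) component
    return parts;
}
```

Since ___ is POSIX- and DOS-safe, the split is unambiguous as long as no source filename itself contains three consecutive underscores.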
Re: Looking for huge (mpd) models
#9
Better late than never. I read your post and thought of my model of Sinterklaas.
Enjoy!


Attached Files
.mpd   Sinterklaas.mpd (Size: 1.54 MB / Downloads: 4)
Jaco van der Molen
lpub.binarybricks.nl
Re: Looking for huge (mpd) models
#10
Very nice, thanks.

One question though: why are half of the bricks mirrored? At first I thought you had drawn half the model and mirrored the whole submodel, but it seems every brick itself is mirrored.
Re: Looking for huge (mpd) models
#20
Roland Melkert Wrote:One question though: why are half of the bricks mirrored? At first I thought you had drawn half the model and mirrored the whole submodel, but it seems every brick itself is mirrored.
I am not sure, but I think I used the Group function in MLCad and mirrored that.
I also used the Duplicate function a lot, so chances are that I copied a mirrored part and used it on the other side.
Jaco van der Molen
lpub.binarybricks.nl
Re: Looking for huge (mpd) models
#21
I see. Anyway, it makes for a nice test case for my new rendering code (I'm planning realtime mirror correction).

It also made me think about adding a tool to mass-correct these kinds of mirrored parts in a model.

Thanks again for this >2 meter high Sint model :-)
Re: Looking for huge (mpd) models
#15
Hi Roland,

I have a parking garage MOC - a single MPD file with 32 sub-sections that is currently 23,019 parts.

If you can accept a scene that is a collection of MPD files in a single directory, the airport to which the garage belongs is currently 34,963 parts.

The garage renders at about 8 fps, and the whole airport renders at about 4.5 fps. On the same machine, Datsville (the version I have is around 39k parts?) runs at 5.5 fps.

Email me at bsupnik at gmail dot com and I can send them to you.

Re: performance, I found a few things with Bricksmith:

* A shader-based pipeline wasn't faster than fixed-function for the same basic work.
* CPU time is proportional to the number of bricks, not the number of vertices.
* When using the programmable pipeline and instancing techniques, BrickSmith is vertex-count bound, not CPU bound. It's not even close -- even with culling and transparent part sorting, I see CPU use of 15-25% while maxed out on fps with 30-40k parts. In this case the interesting number isn't the part count but the vertex count those parts yield. This is with a 4870 - a newer GPU would help the vertex-count bottleneck.

I wrote this up when I finished some perf measurements...

http://www.hacksoflife.blogspot.com/2013...smith.html

cheers
Ben
Re: Looking for huge (mpd) models
#16
I would like to try that scene; I'll contact you shortly.

Ben Supnik Wrote:Re: performance, I found a few things with Bricksmith:

* A shader-based pipeline wasn't faster than fixed-function for the same basic work.
* CPU time is proportional to the number of bricks, not the number of vertices.
* When using the programmable pipeline and instancing techniques, BrickSmith is vertex-count bound, not CPU bound. It's not even close -- even with culling and transparent part sorting, I see CPU use of 15-25% while maxed out on fps with 30-40k parts. In this case the interesting number isn't the part count but the vertex count those parts yield. This is with a 4870 - a newer GPU would help the vertex-count bottleneck.

I wrote this up when I finished some perf measurements...

http://www.hacksoflife.blogspot.com/2013...smith.html

Very interesting blog post. Although I'm not that deep into OpenGL and shaders, it gave me some nice pointers to try with the LDCad renderer. I too have a VBO per 'core' brick, which is drawn using glDrawElements per brick reference. I also do some additional grouping/sorting to limit the state changes, though. I can tell you that the flat shading with the unique vertex/normal pairs is about 15-20% faster (uniqueness testing is an option in LDCad), almost the same amount as the vertex reduction, confirming the point you're making in your text. Although I always use glDrawElements, even when there are no duplicate pairs (I haven't tried using glDrawArrays when the option is disabled yet).

Anyway, the main reason I want to try a non-fixed approach is to supply nicer (per-pixel) lighting and/or (better) noticeable differences between rubber, metal and plain plastic. I'm also hoping it would allow for better BFC/culling handling (flipping normals in the shader, to account for submodel mirroring etc.) like you suggest on the blog.

It would be a bonus if it's faster too :-) Everything will remain optional anyway, because LDCad renders on OpenGL as low as 1.1 and I would like to keep it that way.
Re: Looking for huge (mpd) models
#17
Hi Roland,

For BrickSmith I am planning to move the new rendering code from glDrawArrays to glDrawElements after I implement part smoothing, so that the vertex sharing for draw-elements can be higher.

I think that whether draw-elements vs draw-arrays is faster depends on which part of the GPU pipeline surrounding vertex/triangle processing is bogging down and how transformed vertices are cached. But either way if that's the bottleneck, the next answer is level of detail. Datsville turns into a 125,000,000 vertex model for 39,000 parts; when drawn in a window that's something like 125 vertices _per pixel_...not a good ratio. :-)

One other note: before using shaders, BrickSmith had to compute a VBO for each part in each color that was used for parts that use the 'current' color. For example, for the 2x2 plate with red wheels there'd be a gray & red version and a black & red version stored in two VBOs if the user placed the part twice with different colors.

With shaders, the shader uses a special RGBA value as a place-holder for "use the current color" - the mesh can thus encode the part as it is in the library: red wheels and "current color" plate. Only one VBO is needed, and thus that VBO can be used twice as often, resulting in fewer VBO binding changes (those aren't cheap) and higher instancing counts. It also simplifies the code a bit.
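The placeholder trick can be sketched on the CPU side like this. The sentinel value and all names below are illustrative assumptions, not BrickSmith's actual code; in the real renderer the substitution happens per vertex in the shader, with the instance color supplied as a uniform:

```cpp
#include <array>

using RGBA = std::array<float, 4>;

// Sentinel meaning "use the current (color 16) color". The exact value is
// an assumption; any RGBA that cannot occur as a real color would do.
constexpr RGBA kCurrentColor = {-1.0f, -1.0f, -1.0f, -1.0f};

// CPU-side analogue of the per-vertex shader logic: keep hard-coded
// colors as stored in the mesh, substitute the instance color wherever
// the sentinel appears.
RGBA resolveColor(const RGBA &stored, const RGBA &instanceColor)
{
    return (stored == kCurrentColor) ? instanceColor : stored;
}
```

Because the mesh encodes the part exactly as it is in the library (red wheels plus "current color" plate), a single VBO serves every placed color of that part.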

cheers
Ben
Re: Looking for huge (mpd) models
#18
Ben Supnik Wrote:For BrickSmith I am planning to move the new rendering code from glDrawArrays to glDrawElements after I implement part smoothing, so that the vertex sharing for draw-elements can be higher.

Isn't the number of shared vertices too low to justify all the uniqueness tests with smoothed meshes? Although it's only done once, some parts (especially baseplates) can take quite some time to determine all unique vertices. This becomes even more worrying when texture coordinates get involved.

Ben Supnik Wrote:I think that whether draw-elements vs draw-arrays is faster depends on which part of the GPU pipeline surrounding vertex/triangle processing is bogging down and how transformed vertices are cached. But either way if that's the bottleneck, the next answer is level of detail. Datsville turns into a 125,000,000 vertex model for 39,000 parts; when drawn in a window that's something like 125 vertices _per pixel_...not a good ratio. :-)

Yes, occlusion testing might help with this issue, but it isn't easy to implement as far as I've looked into it.

Ben Supnik Wrote:One other note: before using shaders, BrickSmith had to compute a VBO for each part in each color that was used for parts that use the 'current' color. For example, for the 2x2 plate with red wheels there'd be a gray & red version and a black & red version stored in two VBOs if the user placed the part twice with different colors.

I use a single set of VBOs per brick (triangles, edges, indices); these have no color component at all. I just issue a plain glColor before the glDrawElements call. The only weird thing is that the stock open source (Radeon) driver on Ubuntu (10 .. 12) somehow executes this wrongly for multi-colored parts. But I'm fairly sure that's a driver bug (AMD's own driver doesn't have this issue).
Re: Looking for huge (mpd) models
#19
Roland Melkert Wrote:Isn't the number of shared vertices too low to justify all the uniqueness tests with smoothed meshes? Although it's only done once, some parts (especially baseplates) can take quite some time to determine all unique vertices. This becomes even more worrying when texture coordinates get involved.

I'm not sure, since I haven't written the code; but it's a necessary step for smoothing - the unique-ing comes for free. I'm hoping a good vertex hash map or tree will solve that problem. (Time would be O(N) or O(N log N).)

If search time is a problem, another option would be to unique/smooth the sub-models first and then assume no sharing in the main models. For example, the 1024 studs on 3811.dat all come from stud.dat. If stud.dat is pre-uniqued and pre-smoothed (very cheap), this data can be recycled as-is. But I am going to try brute force first, as it fits the architecture of BrickSmith better.
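The hash-map approach can be sketched like this. The quantization tolerance and all names are my assumptions, not BrickSmith code; the point is that each vertex is looked up in amortized O(1), so the whole pass stays roughly O(N):

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Vec3 { float x, y, z; };

// Quantized position used as a hash key, so near-identical vertices
// produced by different subfiles land on the same key.
struct Key {
    int64_t x, y, z;
    bool operator==(const Key &o) const { return x == o.x && y == o.y && z == o.z; }
};
struct KeyHash {
    std::size_t operator()(const Key &k) const {
        return std::hash<int64_t>()(k.x) ^ (std::hash<int64_t>()(k.y) << 1)
             ^ (std::hash<int64_t>()(k.z) << 2);
    }
};

// Merge duplicate vertices and emit an index buffer for glDrawElements.
void uniqueVertices(const std::vector<Vec3> &in,
                    std::vector<Vec3> &outVerts,
                    std::vector<uint32_t> &outIndices)
{
    const float q = 1e4f; // 1e-4 model-unit tolerance: an assumption
    std::unordered_map<Key, uint32_t, KeyHash> seen;
    for (const Vec3 &v : in) {
        Key k{(int64_t)std::llround(v.x * q),
              (int64_t)std::llround(v.y * q),
              (int64_t)std::llround(v.z * q)};
        auto it = seen.find(k);
        if (it == seen.end()) {           // first time we see this position
            it = seen.emplace(k, (uint32_t)outVerts.size()).first;
            outVerts.push_back(v);
        }
        outIndices.push_back(it->second); // reuse the existing vertex
    }
}
```

For smoothing, the key would grow to include the normal (or texture coordinates), which is where Roland's concern about the extra uniqueness tests comes in.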

Quote:Yes, occlusion testing might help with this issue, but it isn't easy to implement as far as I've looked into it.

Agreed - occlusion culling is difficult even in games where very specific data can be pre-computed offline to make the tests work well. Given an ever-changing user composition I don't see a win. :-(

Quote:I use a single set of VBOs per brick (triangles, edges, indices); these have no color component at all. I just issue a plain glColor before the glDrawElements call.

How do you handle parts like 122c01, where the plate is user-colored, the wheels are red, and the axle is... metallic, I guess? In BrickSmith the color is tagged as a mix of red, metallic, and user-set. If you pass an immediate-mode color, I would think you'd need (at a minimum) 3 draw calls and some side data to tell you which part of the VBO gets which color.

Cheers
Ben
Re: Looking for huge (mpd) models
#22
Ben Supnik Wrote:How do you handle parts like 122c01, where the plate is user-colored, the wheels are red, and the axle is... metallic, I guess? In BrickSmith the color is tagged as a mix of red, metallic, and user-set. If you pass an immediate-mode color, I would think you'd need (at a minimum) 3 draw calls and some side data to tell you which part of the VBO gets which color.

That's pretty much how I do it, using index offsets. The possible multiple draw calls aren't that bad, because 99% of parts are single-color (16) anyway, but it is nice to support multicolored parts using a single VBO set without having to add huge amounts of (duplicate) color information to them.
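The index-offset idea could be sketched like this (all names are illustrative, not LDCad's actual code): group the index buffer by color when the VBO is built, record one range per color, then at draw time issue a glColor plus one glDrawElements per range:

```cpp
#include <cstdint>
#include <vector>

// One draw call's worth of geometry: a contiguous index-buffer range that
// shares a single LDraw color code. Ranges with color 16 take the
// caller's current color; the others use their hard-coded color.
struct DrawRange {
    int      color;  // LDraw color code (16 == "current color")
    uint32_t first;  // offset into the index buffer
    uint32_t count;  // number of indices in the range
};

// Given the color of each index (already grouped by color when the VBO
// was built), compute the per-color ranges. A single-color part yields
// exactly one range, so the common case stays a single draw call.
std::vector<DrawRange> buildRanges(const std::vector<int> &indexColors)
{
    std::vector<DrawRange> ranges;
    for (uint32_t i = 0; i < indexColors.size(); ++i) {
        if (ranges.empty() || ranges.back().color != indexColors[i])
            ranges.push_back({indexColors[i], i, 0});
        ++ranges.back().count;
    }
    return ranges;
}
```

For 122c01 this would yield three ranges (color 16, red, metallic), i.e. the three draw calls Ben estimated, while the VBO itself stays color-free.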

You must also take into account that I wrote the first version of my rendering approach for LD4DStudio, which was in a time when graphics cards didn't have enough memory to store the whole LDraw library ten times over :-)
Re: Looking for huge (mpd) models
#23
Ben Supnik Wrote:
Roland Melkert Wrote:I use a single set of VBOs per brick (triangles, edges, indices); these have no color component at all. I just issue a plain glColor before the glDrawElements call.

How do you handle parts like 122c01, where the plate is user-colored, the wheels are red, and the axle is... metallic, I guess? In BrickSmith the color is tagged as a mix of red, metallic, and user-set. If you pass an immediate-mode color, I would think you'd need (at a minimum) 3 draw calls and some side data to tell you which part of the VBO gets which color.

Note: it's been a really long time since I wrote LDView's current rendering code, so I could be mis-remembering.

I think LDView behaves similarly to what Roland describes. It splits the geometry into two VBOs per part: one with hard-coded colors, and one for color 16 geometry. It then uses glColor before glDrawElements for the color 16 geometry, and none of that geometry includes color data in its vertex data. For the hard-coded colors, the vertex data includes the color data, and that is drawn in a second call. So there will always be two calls for parts that contain non-color-16 geometry, but only one call for parts that are purely color 16.
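That two-bucket split could be sketched like this (the Tri struct and names are illustrative, not LDView's actual types): at part-build time the triangles are sorted into color-16 geometry, drawn once per instance after a glColor call, and hard-coded-color geometry, drawn once with per-vertex color:

```cpp
#include <vector>

struct Tri { int color; /* plus vertex data in the real thing */ };

// Split a part's triangles into the two buckets described above: one for
// color 16 ("current color") geometry and one for geometry whose color is
// hard-coded in the part file.
void splitByColor16(const std::vector<Tri> &tris,
                    std::vector<Tri> &color16,
                    std::vector<Tri> &hardCoded)
{
    for (const Tri &t : tris)
        (t.color == 16 ? color16 : hardCoded).push_back(t);
}
```

A purely color-16 part leaves the hard-coded bucket empty, which is how the common case stays at one draw call.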