badmintonbaseba a day ago

I don't think your algorithm is correct. At least on the checkerboard example on the cube face the diagonals are curved. Perspective transformation doesn't do that.

Possibly you do the subdivisions along the edges uniformly in the target space, and map them to uniform subdivisions in the source space, but that's not correct.
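
A quick numeric sanity check of the "diagonals can't curve" claim (a sketch with illustrative names, assuming a standard pinhole projection): three collinear 3D points stay collinear after projection, even though their spacing changes.

```typescript
// Sketch: under a standard pinhole projection (x, y, z) -> (f*x/z, f*y/z),
// straight 3D lines project to straight 2D lines. Names are illustrative.
type Vec3 = [number, number, number];
type Vec2 = [number, number];

const f = 2; // arbitrary focal length

function project([x, y, z]: Vec3): Vec2 {
  return [(f * x) / z, (f * y) / z];
}

// Three collinear points along the diagonal of a tilted quad.
const a: Vec3 = [0, 0, 4];
const b: Vec3 = [0.5, 0.5, 5]; // the 3D midpoint of a and c
const c: Vec3 = [1, 1, 6];

const [pa, pb, pc] = [a, b, c].map(project);

// 2D cross product of (pb - pa) and (pc - pa); zero means collinear.
const cross =
  (pb[0] - pa[0]) * (pc[1] - pa[1]) - (pb[1] - pa[1]) * (pc[0] - pa[0]);
// cross is 0: the projected points still lie on one straight line, even
// though pb does not land on the 2D midpoint (0.2 rather than 1/6).
```

So under a true perspective transform the checkerboard's diagonals must render straight; only their spacing compresses with depth.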

edit:

Comparison of the article's and the correct perspective transform:

https://imgur.com/RbRuGxD

  • Karliss a day ago

    Considering that the author considers math below his pay-grade, it's not a huge surprise that it's wrong.

    • frizlab a day ago

      YES! I was taken aback by that statement too. I think the opposite: in this age of AI, actually knowing things will be a huge bonus IMHO.

    • fc417fc802 20 hours ago

      > math below his pay-grade

      Completely backwards. Math is much more difficult than programming and LLMs still can't consistently add numbers correctly last I checked. What a strange attitude to take.

  • jeremyscanvic a day ago

    Also known as the good ol' straight lines remain straight in perspective drawing!

  • mistercow a day ago

    Even more obviously, the squares in the front aren’t bigger than the squares in the back. It looks like each square has equal area even as their shapes change.

    It’s fascinating how plausible it looks at a glance while being so glaringly wrong once you look at it more closely.

    • seveibar a day ago

      I've updated the article with the fixed projection transform! I had to make an animation as well just to validate it - I fooled myself!

      • jeremyscanvic a day ago

        The fixed rendering looks really nice. Good job!

  • seveibar a day ago

    Author here: I don’t think the commenter here has set the same focal length. Focal length can make a surface appear curved, and I set it explicitly to a low value to test the algorithm’s ability to handle the increased distortion. You can google “focal length distortion cube” to see examples of how focal length distorts a grid, or google “fish eye lens cube” etc.

    Edit: I think there’s a lot of confusion because the edges of the cube (the black lines) do not incorporate the perspective transform along their length. The texture is likely correct given the focal length, but the cube’s straight black edges are misleading: they are not rendered the same way as the texture. My bad - the technique is valid.

    • Masterjun a day ago

      I think the original commenter is correct that there is a mistake in the perspective code. It seems the code calculates the linear interpolation for the grid points too late. It should be before projecting, not after.

      I opened an issue ticket on the repository with a simple suggested fix and a comparison image.

      https://github.com/tscircuit/simple-3d-svg/issues/14
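
      A minimal sketch of the ordering issue (illustrative names, not the repo's actual code): interpolating a subdivision point in 3D and then projecting gives a different result than projecting the corners first and interpolating in 2D.

```typescript
// Sketch: a subdivision grid point must be interpolated in 3D *before* the
// perspective divide; lerping already-projected 2D corners lands elsewhere.
// All names here are illustrative, not the repo's actual code.
type V3 = [number, number, number];
type V2 = [number, number];

const focal = 1;
const project = ([x, y, z]: V3): V2 => [(focal * x) / z, (focal * y) / z];
const lerp3 = (a: V3, b: V3, t: number): V3 =>
  [a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]), a[2] + t * (b[2] - a[2])];
const lerp2 = (a: V2, b: V2, t: number): V2 =>
  [a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])];

// An edge receding in depth, subdivided at its midpoint (t = 0.5).
const p0: V3 = [0, 0, 1];
const p1: V3 = [1, 0, 3];

const correct = project(lerp3(p0, p1, 0.5));        // lerp in 3D, then project
const wrong = lerp2(project(p0), project(p1), 0.5); // project, then lerp in 2D
// correct[0] = 0.25 while wrong[0] = 1/6: the late lerp picks a different
// point, which is exactly the kind of error that bends the diagonals.
```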

      • seveibar a day ago

        That admittedly looks a lot more correct! Thanks for digging in; I will absolutely test and submit a correction to the article (I am still concerned the straight edges are misleading here)! And thanks to the original commenter as well! I think I will try to quickly output an animated version of each subdivision level; the animation would make it a lot clearer for me!

    • jeremyscanvic a day ago

      I might be missing something but you sound genuinely confused to me. The perspective in your post is linear perspective. It's the one used in CSS and it doesn't curve straight lines/planes. It's not the perspective of fish-eye images (curvilinear perspective).

      • seveibar a day ago

        I was at least a little confused because, yeah, fish eye isn’t possible with a 4x4 perspective transform matrix. I’m investigating an issue with the projection thanks to some help from commenters, and there will be a correction in the article, as well as an animation which should help confirm the projection code.

  • ricardobeat a day ago

    Is it actually possible to draw the correct perspective using only affine transformations? I thought that was the point of the article.

    • badmintonbaseba a day ago

      It is possible to approximate perspective using piecewise affine transformations. It is certainly possible to match the perspective transformation at the vertices of the subdivisions, and only be somewhat off within.

      • itishappy a day ago

        With 6 degrees of freedom, an affine transform can only fit three 2D points at a time. Triangulation causes the errors shown in the article, which is why subdivision is needed.
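
        To make the 6-DOF counting concrete (a sketch with made-up names): mapping the unit triangle onto any three target points uses up all six numbers of a 2D affine map, so the fourth corner of a quad is forced to the parallelogram point, whereas a perspective transform is free to put it elsewhere.

```typescript
// Sketch: a 2D affine map [[a, b, tx], [c, d, ty]] has 6 unknowns, so three
// (source -> target) point pairs determine it exactly. Illustrative names.
type Pt = [number, number];

// Affine map taking the unit triangle (0,0), (1,0), (0,1) to d0, d1, d2.
function affineFromUnitTriangle(d0: Pt, d1: Pt, d2: Pt) {
  const [a, c] = [d1[0] - d0[0], d1[1] - d0[1]]; // image of basis vector x
  const [b, d] = [d2[0] - d0[0], d2[1] - d0[1]]; // image of basis vector y
  const [tx, ty] = d0;                           // translation
  return ([x, y]: Pt): Pt => [a * x + b * y + tx, c * x + d * y + ty];
}

const apply = affineFromUnitTriangle([2, 1], [5, 2], [3, 4]);
// The three chosen points are matched exactly:
//   apply([0,0]) -> [2,1], apply([1,0]) -> [5,2], apply([0,1]) -> [3,4]
// but the fourth corner (1,1) is forced to d1 + d2 - d0 = (6, 5): the map
// has no freedom left to place it, hence triangulation and its artifacts.
```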

    • jeremyscanvic a day ago

      I think GP's point is that besides the unavoidable distortions coming from approximating a perspective transform by a piecewise affine transform, the implementation remains incorrect.

gyf304 2 days ago

It’s worth noting that this same restriction of not being able to do perspective transformations is also one of the defining characteristics of PlayStation 1 graphics. And the workaround of subdivision is also the same workaround PS1 games used.

More reading: https://retrocomputing.stackexchange.com/questions/5019/why-...

  • bhouston a day ago

    It is also a limitation that many early software-rasterized DOS 3D games had (e.g. Descent).

    This is because the perspective transform requires a divide per pixel, which was too costly on the CPUs of the time, so they skipped it to get acceptable performance.

    • BearOso a day ago

      It's also commonly known that Quake only did a perspective divide every 16 pixels.

      It's funny that, on today's CPUs, floating-point divide is so much faster than integer divide.
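
      The trick can be sketched like this (illustrative code, not Quake's actual inner loop): u/z and 1/z are linear in screen space, so the true divide happens only at every 16th pixel and the texture coordinate is lerped between those anchors.

```typescript
// Sketch of the divide-every-N-pixels trick (illustrative, not Quake's code):
// u/z and 1/z interpolate linearly across a scanline, so the expensive divide
// (u/z)/(1/z) is done only every `stride` pixels; in between, u itself is
// linearly interpolated, which is nearly indistinguishable over 16 pixels.
function scanlineU(
  uOverZ0: number, uOverZ1: number, // u/z at the two ends of the span
  invZ0: number, invZ1: number,     // 1/z at the two ends of the span
  width: number,
  stride = 16
): number[] {
  const u = new Array<number>(width);
  // True perspective-correct u at pixel i (costs one divide).
  const exactU = (i: number) => {
    const t = i / (width - 1);
    return (uOverZ0 + t * (uOverZ1 - uOverZ0)) / (invZ0 + t * (invZ1 - invZ0));
  };
  for (let start = 0; start < width - 1; start += stride) {
    const end = Math.min(start + stride, width - 1);
    const uA = exactU(start); // divides happen at the anchors only
    const uB = exactU(end);
    for (let i = start; i <= end; i++) {
      u[i] = uA + ((i - start) / (end - start)) * (uB - uA);
    }
  }
  return u;
}
```

Over a 16-pixel run the hyperbola barely deviates from its chord, which is why the artifact is so hard to spot in motion.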

  • bn-l a day ago

    Huh that’s so crazy. I had that in my head as I was reading the article. I was thinking about some car game and the way the panels would look when it rotated in your “garage”.

JKCalhoun 2 days ago

Subdivision is a good trick.

A friend was writing a flight simulator from scratch (using Foley and van Dam as reference for all the math involved). A classic perspective problem might be a runway.

Imagine a regularly spaced dashed line down the runway. If you get your 3D renderer to the stage that you can texture quads with a bitmap, it might seem like a simple thing to have a large rectangle for the runway, a bitmap with a dashed line down the center for the texture.

But the texture mapping will not be perspective (well, not without a lot of complicated math involved).

Foley and van Dam say: break the runway into a dozen or so "short" runways laid end to end (subdivide). The bitmap texture for each is just a single short stripe. Because you have a bunch of these quads end to end, it is as if there were one longer runway with a series of dashed lines. And while each individual piece of the runway (with its single stripe) is not in itself truly perspective-correct, each quad, as it gets farther from you, nonetheless accounts for perspective: it is smaller, more foreshortened.
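
The effect is easy to verify numerically (a sketch with illustrative numbers, assuming a simple pinhole projection): project the near and far edge of each short runway segment, and the on-screen heights shrink piece by piece.

```typescript
// Sketch: cut a long flat runway (y = -1, z from 2 to 22) into 10 short quads
// and measure each quad's projected height. Illustrative pinhole projection.
const f = 1; // focal length
const projectY = (y: number, z: number): number => (f * y) / z;

const segments = 10;
const zStart = 2;
const runwayLength = 20;
const step = runwayLength / segments;

const heights: number[] = [];
for (let i = 0; i < segments; i++) {
  const zNear = zStart + step * i;
  const zFar = zNear + step;
  // On-screen height of this piece of runway.
  heights.push(Math.abs(projectY(-1, zNear) - projectY(-1, zFar)));
}
// heights is strictly decreasing: each farther quad is more foreshortened,
// so the stack of affine-textured pieces reads as correct perspective.
```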

  • kibibu 2 days ago

    Perspective correct texture mapping has been solved for quite some time without excessive subdivision.

    It was avoided in the Foley and Van Dam days because it requires a division per rasterized pixel, which was very slow in the late 80s.

    • taylorius 2 days ago

      Back in the early 90s I did a version of Bresenham's algorithm that would rasterize the hyperbolic curves that perspective-correct texture mapping requires. It worked correctly, though the technique of just doing a division every n pixels and linearly interpolating won out in the end, if I recall.

    • rixed a day ago

      You could also avoid divisions entirely, while still keeping 100% correct perspective, by "rasterizing" the polygon along lines of constant Z. You would save the divs, but then you would draw mostly outside the cache, so it's no panacea; still, for large surfaces it was noticeably nicer than the divide-every-N-pixels approximation.

jesse__ 2 days ago

"it's a lightweight SVG renderer"

Meanwhile... drawing 512 subdivisions for a single textured quad.

It's a cute trick, certainly, but ask this thing to draw anything more than a couple thousand elements and I bet it's going to roll over very quickly.

Just use webgl where perspective-correct texture mapping is built into the hardware.

  • seveibar 2 days ago

    The goal for this vanilla TS renderer is to have visual diffing on GitHub and a renderer that works without a browser environment. Most 3D renderers focus on realtime speed, not file size and runtime portability. I think in practice we will configure the subdivisions at something like 64 for a good file-size tradeoff.

    • kookamamie 2 days ago

      Why use SVG for this, though? This could be easily implemented as pure JS software rasterizer without all the tessellation workarounds.

      • ricardobeat a day ago

        > The goal for this vanilla TS renderer is to have visual diffing on GitHub and a renderer that works without a browser environment

        • itishappy a day ago

          This doesn't answer the question. If you're doing all this work in JS to render a static SVG, why not just "do it right" and output a static PNG instead?

          • seveibar a day ago

            The top of the PCB (the lines etc.) is computed as an SVG; I would have to build an SVG rasterizer just to begin with that approach, and then I would be limited in what images I could rasterize. It would also be much, much slower than quickly computing matrices.

shaftway a day ago

Neat technique.

I was on the original SVG team at Adobe back in '00 and built some of the first public demos that used the technology. This kind of 3d work was some of the first stuff I tried to do and found it similarly lacking due to the lack of possible transforms. I had some workarounds of my own.

One demo had a 3d stack of floors in a building for a map. It used an isometric perspective (one where parallel lines never converge) and worked pretty well. That is pretty easy and can be accomplished with rotation and scaling transforms.

The other was a 3d molecule viewer where you could click and drag around to view the structure. This one basically used SVG as a canvas with x and y coordinates for drawing. All of the 3d movement was done in Javascript, computing x and y coordinates and updating shapes in the SVG DOM. Styles were used to handle single / double / triple bonds, and separate groups were used to layer everything for legibility.

exabrial a day ago

I hope someday where we get back to a simple HTML/CSS standard for "text" pages and that's it. No JavaScript, no DOM. This covers 70% of the web use cases.

"Everything else" would be a pluggable execution runtime that are distributed as browser plugins: [WASM Engine, JVM engine, SPIR-V Engine, BEAM Engine, etc] with SVG as the only display tech. The last thing we'd define is an interrupt and event model for system and user interactions.

iamleppert a day ago

What does he think SVG is doing under the hood? Rasterization. Everything does rasterization at some point in the process. Calculating 512 clip paths to render a single quad that could be drawn in a single for loop is insane.

  • itishappy a day ago

    SVG has no concept of 3d space so you'd have to write your own SVG rasterizer if you want it to render perspective.

    • leptons a day ago

      SVG is the wrong tool for this job.

    • rixed a day ago

      ...and transfer all those pixels to the browser.

laszlokorte 2 days ago

Very cool! I just implemented an SVG 3D renderer a few weeks ago [1], but I did not implement texturing yet and wondered how one could do this.

[1]: https://youtu.be/kCNHQkG1Q24?si=3VxfVFtG2MiEEmlX

  • seveibar a day ago

    Your renderer looks awesome! I was surprised there wasn't an "off the shelf" SVG renderer in native TS/JS; it's a big deal to be able to create 3D models without a heavy engine for visual snapshot testing!

  • CrimsonCape a day ago

    When you loaded Suzanne, my eye could detect framerate drop when moving the model. What is the hot path in the calculations?

    • laszlokorte a day ago

      The implementation shown in the video is actually particularly slow, because all the geometric transformations are implemented in terms of lenses/optics ([1]) and ramdajs ([2]). So the whole mess is a gigantic stack of nested, composed and curried functions instead of raw linear algebra (just because I could).

      I later optimized the hot path and it is significantly faster (still miles behind webgl/webgpu, obviously). You can try it yourself if you scroll alll the way to the veeeerrrry bottom here [3].

      [1]: https://github.com/calmm-js/partial.lenses [2]: https://ramdajs.com/ [3]: https://static.laszlokorte.de/svatom/

  • badmintonbaseba a day ago

    Another approach would be to apply the transformation to SVG elements separately. Inkscape has a perspective transformation tool, which you can apply to paths (and paths only). It probably needs to do approximation and subdivision on the path itself though, which is possibly more complex.

rollulus 2 days ago

I’m afraid your CSS triangles are still rendered through rasterization but a good job nonetheless.

  • bufferoverflow a day ago

    But he isn't limited to one specific resolution. If he used PNG, he would be limited.

unwind 2 days ago

Very nice-looking for being SVG!

One possibly uncalled-for piece of feedback: is that USB-C connection finished, and is it complying with the various detection resistor requirements for the CCx pins? It seemed very bare and empty, I was expecting some Rd network to make the upstream host able to identify the device. Sorry if I'm missing the obvious, I'm not an electronics engineer.

See [1] for instance.

[1]: https://medium.com/@leung.benson/how-to-design-a-proper-usb-...

  • seveibar a day ago

    Because it’s only being used for power and doesn’t need a lot of power, it works for the simple board we rendered. In practice you would absolutely want to set the CC1 and CC2 configuration with resistors!

weinzierl 2 days ago

This is a cool project and I think I can use it. I was just wondering if perspective correctness is all that important for a PCB renderer? The distortion should be minimal for these kinds of images, and I think old CAD programs often did not use correct perspective either.

  • seveibar a day ago

    We could absolutely use isometric projection, but personally I find it a bit hard to visually parse.

JKCalhoun 2 days ago

Some wild stuff about "defs" that I was unaware of in SVGs.

  • seveibar 2 days ago

    Defs saved the day here on file size - repeating the image (which we usually base64 encode) would have caused a much larger file size and made rasterization much more appealing!

itishappy a day ago

Since the final SVG will have a set perspective and still requires rendering... What's the benefit over rendering an image?

  • seveibar a day ago

    Very small files and a much simpler rendering scheme! I don’t have to rasterize my SVGs that represent the top of my board

    • itishappy a day ago

      > Very small files and a much simpler rendering scheme!

      For a 400x400 SVG with 6 surfaces and 64 subdivisions your file size is only 10x smaller than an uncompressed bitmap. Your SVG should scale linearly with number of objects and be constant with resolution, while an image would scale with the resolution (quite favorably if compressed) and be constant with the number of objects. I'd be interested to know the size of the example at the top of the article.

      Also you already have the math to transform points!

      > I don’t have to rasterize my SVGs that represent the top of my board.

      Ahhhhhh. This clears it all up!

m-a-t-t-i a day ago

Interesting. I've been doing 3D SVG by storing the xyz-coordinates in a separate array and using inlined javascript to calculate & refresh the 2D coordinates of the SVG items themselves after rotation. But this means the file only works in a browser. Maybe it would be possible to replace the javascript with native functions, so the same file would work everywhere.

  • badmintonbaseba a day ago

    How do you transform paths? Do you just transform the control points?

    • m-a-t-t-i a day ago

      Yeah, paths are saved in an array where each path segment is a list of control points coupled with the corresponding path command (M, L, C). Those can be used to recreate the path item.
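
      For what it's worth, that scheme can be sketched like this (hypothetical names, not your actual code): each segment keeps its command plus control points, and the `d` attribute is regenerated after transforming the points.

```typescript
// Sketch of the scheme described above (hypothetical names): store each path
// segment as its SVG command plus control points, transform the points, and
// regenerate the `d` attribute.
type Pt = [number, number];
interface Segment { cmd: "M" | "L" | "C"; pts: Pt[]; }

function rebuildD(segs: Segment[], transform: (p: Pt) => Pt): string {
  return segs
    .map((s) => s.cmd + s.pts.map((p) => transform(p).join(",")).join(" "))
    .join(" ");
}

// Example: scale a path by 2 and regenerate its `d` string.
const path: Segment[] = [
  { cmd: "M", pts: [[0, 0]] },
  { cmd: "C", pts: [[1, 0], [1, 1], [0, 1]] },
];
const d = rebuildD(path, ([x, y]) => [2 * x, 2 * y]);
// d === "M0,0 C2,0 2,2 0,2"
```

One caveat: transforming only the control points is exact for affine transforms, but under perspective (or any non-affine map) the Béziers between them are only approximated, which circles back to the subdivision problem discussed above.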

dedicate 2 days ago

This is seriously cool! I've always mentally boxed SVG into the 2D corner, so seeing it handle 3D projection like this is pretty mind-bending...

rixed a day ago

This is nice, but the article left me unconvinced that you need textures at all. Be it a checker or the drawing on a circuit board, can't you keep everything as vectors, thus avoiding the problem entirely?

  • seveibar a day ago

    Circuit boards have holes, cutouts and import STL/OBJ components that we'll eventually support in this 3d renderer. Assuming we get that far I may have to rename it from "simple-3d-svg"!

    • leptons a day ago

      I think you'll probably run into performance problems with SVG before you get too far. I can't imagine SVG will perform fluidly with complex circuit boards.

      SVG elements are DOM elements, after all, and too many DOM elements will cause browser performance issues. I learned this the hard way: after adding a few hundred SVG <path> elements along with a few hundred <div> elements in a React-based interactive web application, I ended up needing to move to a canvas solution instead, which works amazingly well.

      I really hope you have all that figured out, because I don't think it's going to work well using SVG to render complex circuit boards. But maybe your product is only working with very simple circuit boards?

stuaxo 2 days ago

Awesome. If this gets really popular I could imagine perspective transforms being proposed for SVG itself.

  • chrismorgan a day ago

    I’m not certain, but I think Firefox just implemented 3D transformations for SVG from the start. It wasn’t exactly hard to conceive. Certainly by mid-2017 it had it. Somewhere around that time there was also concerted effort toward aligning SVG and CSS.

    (Firefox’s implementation does still suffer from one long-standing bug which means you want to make sure your viewbox unit is larger than one device pixel, but that’s normally not hard to achieve. https://oreillymedia.github.io/Using_SVG/extras/ch11-3d.html... shows what it’s about. I don’t really understand why that problem isn’t fixed yet; what I presume is the underlying issue affects some HTML constructs too when you scale things up, and surely it’s not that rare? I know I found one such problem a decade ago (and, being in HTML, it couldn’t be worked around like you can with SVG). They’ve improved things a bit, but not entirely.)

    Sadly, no one else seemed all that interested in making 3D transformations work properly in SVG content.

  • moron4hire a day ago

    Three.js has had an SVG rendering back end for 13 years. It's going to be pretty hard to get much more popular than Three.js to get over the browser vendors' reluctance to make any changes to SVG.

est 2 days ago

I remember someone made a 3D renderer in IE5.5 using CSS border triangles. Voronoi diagrams and stuff.

ndgold 2 days ago

What’s the SOTA for 2D object diagrams to 3D CAD output?

leptons a day ago

When things like Three.js exist, developing an SVG 3D engine to display circuit boards seems like a ridiculous thing to do.

Why did you feel you had to do this with SVG?