Enabling 3D mode allows for designing pictures with multiple layers, where some pixels protrude further out than others
This may not be a good fit for all images, but can work quite well in many cases
Supported image formats are dependent on your browser's compatibility
Due to the nature of the Lego Art sets, images with transparency aren't fully supported
Be careful when using high resolutions - this can cause performance issues on less powerful machines, especially during pdf generation and for 3D previews
Computing the depth map is computationally expensive. Be prepared to wait a bit, especially if you have a less powerful device.
How does this work?
The depth map is computed using a DNN (deep neural network). For the reasons described in the 'about' section, everything runs entirely within the browser, using a modified version of ONNX.js. The model used is MiDaS - more specifically, the small ONNX version, which can be found here.
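To build layered instructions, the continuous depth values the network produces have to be mapped to a small number of discrete plate heights. The sketch below is illustrative only - the function name, and the way the tool actually post-processes the MiDaS output, are assumptions:

```typescript
// Hypothetical sketch: quantize a normalized depth map into discrete
// plate layers. The number of levels and all names are illustrative;
// the tool's actual post-processing may differ.
function quantizeDepth(depth: number[], levels: number): number[] {
  const min = Math.min(...depth);
  const max = Math.max(...depth);
  const range = max - min || 1; // avoid division by zero on flat maps
  return depth.map((d) => {
    // Map each raw depth value to an integer layer in [0, levels - 1]
    const normalized = (d - min) / range;
    return Math.min(levels - 1, Math.floor(normalized * levels));
  });
}
```

For example, `quantizeDepth([0.1, 0.4, 0.9, 1.3], 4)` spreads those raw values across layers 0 through 3.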
This setting determines which algorithm is used to resize the image to the target resolution
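One of the simplest resize strategies a setting like this might offer is nearest-neighbor sampling. The sketch below is only an illustration of that idea, not the tool's actual implementation:

```typescript
// Illustrative nearest-neighbor resize: for each target pixel, sample
// the closest source pixel. Fast, but can look blocky compared to
// averaging-based strategies.
function resizeNearest(
  src: number[][], // src[y][x] = pixel value
  outW: number,
  outH: number
): number[][] {
  const srcH = src.length;
  const srcW = src[0].length;
  const out: number[][] = [];
  for (let y = 0; y < outH; y++) {
    const row: number[] = [];
    for (let x = 0; x < outW; x++) {
      const sy = Math.min(srcH - 1, Math.floor((y * srcH) / outH));
      const sx = Math.min(srcW - 1, Math.floor((x * srcW) / outW));
      row.push(src[sy][sx]);
    }
    out.push(row);
  }
  return out;
}
```

Averaging-based strategies instead combine all source pixels that fall inside each target pixel, which usually preserves more detail at mosaic resolutions.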
● This section specifies how many pieces of each color you have available to create the image
● Color names are Bricklink colors
● Step 4 of the algorithm cannot run unless you select enough pieces to fill the picture ('Missing Pieces' must be 0)
● If you're working with an existing set, clear the available pieces and use the mix in option to add the pieces from your set.
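The 'Missing Pieces' check amounts to comparing, color by color, how many pieces the quantized image needs against how many you've made available. This is a hypothetical sketch of that comparison, not the tool's actual code:

```typescript
// Hypothetical sketch of the 'Missing Pieces' check: step 4 can only
// run once every color the quantized image needs is covered by your
// inventory, i.e. the returned map is empty.
function missingPieces(
  required: Map<string, number>, // color -> pieces needed
  available: Map<string, number> // color -> pieces you own
): Map<string, number> {
  const missing = new Map<string, number>();
  for (const [color, needed] of required) {
    const have = available.get(color) ?? 0;
    if (needed > have) missing.set(color, needed - have);
  }
  return missing;
}
```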
Required Pieces:
Important: Infinite piece counts were used because either a linear error dithering algorithm was selected in the 'Quantization' section or a variable piece type was selected in the 'Pixel Piece' section
Note: Any colors painted using the paintbrush are assumed to exist when infinite piece counts are enabled
Color | Number Available |
---|---|
Important: Since a variable size pixel type was selected, 'Infinite Piece Counts' under 'Available Colors' was enabled - be careful!
Available Dimensions:
● This setting determines which distance function is used to align pixels to their closest Lego colors
● This determines what strategy is used for aligning pixels
● Some algorithms run faster than others - be careful when running the greedy algorithms on larger images
Important: Since this is a linear error dithering algorithm, 'Infinite Piece Counts' under 'Available Colors' was enabled - be careful!
It's often best to use 'Euclidean RGB' for the color distance function, for mathematical cleanliness
Changing this is useful if you have a large background but not enough pieces to fill it out uniformly in step 4
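To make the two settings above concrete, here is a minimal sketch of linear error dithering (in the style of Floyd-Steinberg, simplified to one dimension) against a fixed palette, using Euclidean RGB as the distance function. All names are illustrative and the tool's actual algorithms may differ - but it shows why dithering implies infinite piece counts: the quantization error diffuses freely, so per-color usage can't be capped in advance:

```typescript
type RGB = [number, number, number];

// Squared Euclidean distance in RGB space (squaring preserves ordering,
// so the square root can be skipped when only comparing).
function euclideanRGB(a: RGB, b: RGB): number {
  return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 + (a[2] - b[2]) ** 2;
}

function nearestColor(pixel: RGB, palette: RGB[]): RGB {
  let best = palette[0];
  let bestDist = Infinity;
  for (const c of palette) {
    const d = euclideanRGB(pixel, c);
    if (d < bestDist) {
      bestDist = d;
      best = c;
    }
  }
  return best;
}

function ditherRow(row: RGB[], palette: RGB[]): RGB[] {
  // 1-D simplification: push the full quantization error onto the next
  // pixel (real Floyd-Steinberg also spreads it to the row below).
  const out: RGB[] = [];
  const err: RGB = [0, 0, 0];
  for (const pixel of row) {
    const adjusted: RGB = [
      pixel[0] + err[0],
      pixel[1] + err[1],
      pixel[2] + err[2],
    ];
    const chosen = nearestColor(adjusted, palette);
    for (let i = 0; i < 3; i++) err[i] = adjusted[i] - chosen[i];
    out.push(chosen);
  }
  return out;
}
```

For example, a row of mid-gray pixels dithered against a black-and-white palette alternates between the two colors, approximating gray on average.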
● Click a pixel to increase its height
● Click a pixel to decrease its height
Color | Dimensions | Number Used |
---|---|---|
Color | Number Missing |
---|---|
The type of piece used depends on 'Pixel Piece' under step 3
Frame elements are not exported
Longer instructions may be split into multiple files
Color names are Bricklink colors
Depending on your hardware and the resolution you've chosen, the pdf can take quite a few seconds to generate. Be prepared to wait if you're generating instructions for larger resolutions, especially for high quality pdfs. Larger resolutions may also cause some slowness on the page or may not work at all on less powerful devices, so I recommend starting at the default and then going up.
● This is a (very) rough preview of what the 3D effect might look like
● Hover your mouse over the image to vary the perspective
● Make sure your depth map is not blank
● This is unlikely to work well on less powerful devices, since this is generated dynamically
● Keep in mind that the effect varies from browser to browser, can be subtle, and may not be 100% representative of what the physical art piece would look like
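A hover effect like this can be approximated with simple parallax: each depth layer shifts by an amount proportional to its height and the cursor's distance from the image center, so nearer layers appear to move more. The function and strength constant below are illustrative assumptions, not the tool's actual rendering code:

```typescript
// Hypothetical parallax sketch: nearer layers (higher index) shift
// further for the same cursor movement, creating the depth illusion.
function layerOffset(
  layer: number, // 0 = base plate, higher = closer to the viewer
  cursorX: number, // cursor position relative to image center, in px
  strength = 0.5 // illustrative parallax strength
): number {
  return layer * cursorX * strength;
}
```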
● This is the set of plates that may be used to generate depth instructions and piece lists
● These pieces are used as padding so that the correct pixels protrude outwards
● Note that larger plates may be difficult to attach/detach from the base
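One simple way to pick padding plates is greedily: stack plates under a pixel until it reaches its target height, always preferring the tallest plate that still fits. This is only a hypothetical sketch - the real selection also has to respect the plate shapes you've enabled, and the names here are assumptions:

```typescript
// Hypothetical greedy padding sketch. Heights are in plate units.
function padToHeight(target: number, plateHeights: number[]): number[] {
  const sorted = [...plateHeights].sort((a, b) => b - a);
  const stack: number[] = [];
  let remaining = target;
  for (const h of sorted) {
    // Use as many of the tallest remaining plate as will fit
    while (h <= remaining) {
      stack.push(h);
      remaining -= h;
    }
  }
  if (remaining > 0) {
    throw new Error(`Cannot reach height ${target} with the given plates`);
  }
  return stack;
}
```

Note that a plain greedy pick can fail for some plate sets even when a valid combination exists (e.g. reaching height 6 with only 4s and 3s), so this is a sketch of the idea rather than a robust solver.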
Available Plates:
Longer instructions may be split into multiple files
Depending on your hardware and the resolution you've chosen, the pdf can take quite a few seconds to generate. Be prepared to wait if you're generating instructions for larger resolutions, especially for high quality pdfs. Larger resolutions may also cause some slowness on the page or may not work at all on less powerful devices, so I recommend starting at the default and then going up.
Below is a recording of my tech talk from BrickCon 2021
If you're interested in understanding how this site works, the talk goes over the techniques and algorithms that were used
It also goes over some ideas that haven't (yet) been implemented within the tool, so it functions fairly well as a more general overview of the Lego mosaic space
You can find the slide deck used in the talk here (with some updates since the talk), and if the video doesn't load, you can find it directly on BrickCon's YouTube channel here
These are some other articles and videos featuring Lego Art Remix
Some are quite interesting even outside the context of this tool in particular, since they go into the history of Lego mosaics
Note that some were made when the tool was older
In 2020, The Lego Group released the Lego Art theme, which allows people to create a predetermined image using Lego studs. Lego Art Remix lets you upload your own image, and then uses computer vision to recreate the image using the studs from a Lego Art set that you already have.
This project is not affiliated with The Lego Group
The computer vision techniques used are pretty inexpensive (with the exception of optional depth map generation), and the resolutions being dealt with are naturally quite low, so as of the time of writing, the algorithm runs quite quickly. This allows it to run on the client, and on the machines that I tested, it ran in near real time.
The most computationally expensive part of the process, apart from depth map generation, is generating the instructions, since even pdf generation is done client side.
Since it runs almost entirely within the browser (see the source code), no image data is sent to a server, and so it's very secure. This also makes it much easier for me to maintain and host. The only server code consists of simple increments to anonymously estimate usage, for the purposes of tracking performance in case the static deployment needs to be scaled up, and for the counter in the about section.
Even the deep neural network to compute depth maps is being run entirely within the browser, in a web worker, using a modified version of ONNX.js. I've compiled a version of the library based on this pull request, with a small additional change I made to support the resize operation in v10. The model used is MiDaS - more specifically, the small ONNX version which can be found here.
As of the time of writing, I don't have all of the sets, and I haven't had much time to test. As a result, there are probably a few bugs, so let me know if you find any.
Algorithm improvement ideas are always welcome. Improvements that maintain the efficiency to within a reasonable degree would allow the algorithm to keep running on the client, which I really like.
Note: No user data is stored, so this is just aggregated info based on simple increments
Date | Images created |
---|---|