The new free “All Purpose Strand” is a SuperFly ‘universal shader’ for Poser 11’s Hair Room strand hair and fur.
My saved PDF copy of the instructions. New textures can be easily plugged into the shader.
I just wanted to make a quick test of the Blender Decimate ‘decimator’ feature. A free decimate, aka poly-reduction, aka mesh reduction. What could be nicer? Especially as the free Meshlab 2020 is still flaky and very crash-prone.
First of all, where is the Decimate panel in Blender? Everyone, including the official manual, shows you a nice picture of the Decimate panel itself. “Just click the Spanner icon…” reveal a couple of rather more helpful YouTube videos. But that was no use to me, because there was no Spanner icon…
Bizarrely, the spanner icon then does appear and it persists, but only when the items in the scene are deselected and re-selected. No other change had been made. Go figure…
Anyway, with this “missing icon” hassle fixed, the user can finally select “Add modifier” and choose to load the Decimator panel…
Once the Decimate panel’s UI is visible there’s some weird stuff going on in it. You expect to see a normal poly-count and a nice smooth dial to dial this down. But you get “faces” instead of polys. I know this is a 2.3m poly .OBJ, so Blender telling me it has a 73,000 “Face count” is not at all helpful.
Then there’s the reduction dial… which is not labelled as such, and is not really a dial, as it’s just horrible to try to operate with a mouse. There’s also no indication that its “1.0” setting is actually 100%. The best way to operate this awkward cryptic pseudo-dial appears to be double-clicking on it and just manually typing in a desired setting. Turns out that “0.18” = 18% of the original.
You can see the mesh modifying in real-time, which is something.
Then, buried deep in a forum, is the advice that you have to save the .BLEND file before you export, or you won’t get the reduction. Another bit of fairly vital advice missing from other sources, including the Manual.
So then I export to a new .OBJ file, with “Apply modifiers” ticked in the OBJ export panel. This takes five minutes, during which the PC becomes deeply unresponsive. And what do I get from that? A 174Mb OBJ, 50Mb bigger than the original!
Ok, so it’s back to search and the forums. Turns out you have to not only save the file but also, in the Modifier panel…
“Apply the modifier using the apply button on the modifier.”
“Modifiers can be permanently applied by clicking ‘Apply Modifier’ while in Object mode.”
OK. Where’s that button then, because it is definitely not on the panel I see…
Apparently it should be under the “Add modifier” label.
Does it show up if I move from Object to Edit mode? Nope. If I switch away to a new tab and then back again? Nope. If I close Blender and re-open? Nope.
Turns out this button has been recently removed(!) to be hidden under a fiddly little drop-down arrow…
This drop-down at least usefully informs me that I had switched to the wrong mode. Switching back to Object mode caused the “Apply” to become active…
On clicking the live “Apply” the Decimate Modifier panel vanishes, the items in the scene are deselected, and in the top item there’s a new little circular icon near the name. I assume it’s applied.
OK, so I save the .BLEND file a third time. Then save out as an OBJ, “Apply modifier” once again ticked in the export panel. Things go a little more smoothly now, though the export was still about three minutes. The only problem was… the resulting OBJ was just as large as before, at 174Mb!
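For anyone stuck in the same loop, a few lines of Python can at least confirm whether the decimation ever reached the exported file, since a Wavefront .OBJ is plain text and its vertex and face records are easy to count. A hedged sketch, nothing to do with Blender itself; the file names in the comment are placeholders.

```python
def obj_counts(path):
    """Count the vertex and face records in a Wavefront .OBJ file."""
    verts = faces = 0
    with open(path, "r", errors="ignore") as fh:
        for line in fh:
            if line.startswith("v "):    # vertex record
                verts += 1
            elif line.startswith("f "):  # face record
                faces += 1
    return verts, faces

# Hypothetical usage: compare the source and the export.
# If the second face count isn't roughly 18% of the first,
# the Decimate modifier never actually got applied.
# print(obj_counts("original.obj"), obj_counts("decimated.obj"))
```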
So, three hours later, I still had no decimated circa-20Mb .OBJ out of Blender. I gave up. Blender appears to have some very nice developing NPR and real-time features… but is still an utter pain in the arse to actually try to use.
Now I know why people who want to decimate OBJs a lot pay $50 for the speed and ease of Atangeo Balancer.
There’s a long-standing, out-of-the-blue problem that periodically hits a few users of DAZ Studio, and it appears to stump forum helpers every time. It’s often wrongly assumed to be a graphics drivers problem. That can be true if the user recently upgraded to a new version of DAZ Studio that requires new card drivers for GPU rendering, but it does not apply if no such change was made.
Symptoms: The user has not recently upgraded DAZ Studio to the latest version. An unspecified “Error during rendering” has caused iRay to fail, with that rather unhelpful message. If the user tries to close rendering and render again, they are told that iRay is already in use (even though it has definitely closed). Other render modes are still possible.
Solution: A little digging with my handy new Technical Search-Engine suggests that the only one who got it right was one ‘JaeAlexisLee’ in 2015, who painstakingly went through every damn thing that could be wrong with the wayward scene. At last it was found…
“Found it! Occurs if the Render SubD level is higher than 1!”
Where to find these settings: I’ve tested this solution, and it does indeed cure the problem instantly and perfectly. So, here are where you find the settings you need to change in DAZ Studio 4.12…
Manually putting the above settings back to Base | 0 | 0 and saving the scene then enabled an iRay render to run again fine. I then switched them up to High Resolution | 1 | 1 for better resolution, saved the scene again, and once again DAZ Studio rendered fine in iRay. The “Catmark” setting appears to be harmless.
HD morphs presets as probable cause: I suspect these settings were silently changed on loading one or more HD Morph presets from the Library, and that such things need high settings in order to accommodate the morph dial’s need for lots of polygons in the Genesis figure mesh. Possibly those with amazingly powerful graphics cards can handle such HD things, so it may indeed be a RAM problem ‘of a kind’. But many DAZ Studio users will not be able to run their figure at that high level.
I can now report on my experiment with a gig on Fiverr that offered to “Download an entire website from the Internet Archive Wayback Machine”. You can only choose one date to grab, and hope for the best. I had picked a date in late 2015.
Joseph came through and provided a 310Mb .ZIP containing 1.1Gb of archive that Archive.org had backed up from the RDNA forums. 3,000+ forum threads, plus individual posts, back to 2005. It’s something I’d never have obtained otherwise, since it involves typing in command lines and other icky stuff. He knows how to do it, and is happy to do so for just £6. I’ll use him again for this sort of thing.
I didn’t expect Joseph to re-link everything and make the RDNA forums work ‘as if on a live site again’, because it was a PHP-driven site. For that reason, it’s highly unlikely ever to go online again as a working site.
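For those who’d rather try the ‘command lines and other icky stuff’ themselves, the Wayback Machine has a public CDX API that lists every capture it holds for a site. A minimal sketch of building such a query in Python follows; the CDX endpoint is real, but the particular parameter choices are just one sensible configuration, and the site name in the comment is only an example.

```python
from urllib.parse import urlencode

CDX = "https://web.archive.org/cdx/search/cdx"

def cdx_query(site, date_from, date_to):
    """Build a Wayback Machine CDX API query URL.

    Fetching the returned URL (with curl, wget or urllib) lists one
    archived capture per line: timestamp, original URL, status code
    and so on. Those captures can then be downloaded one by one.
    """
    params = {
        "url": site + "/*",          # wildcard: every page under the site
        "from": date_from,           # YYYYMMDD
        "to": date_to,
        "filter": "statuscode:200",  # skip redirects and errors
        "collapse": "urlkey",        # one row per unique URL
    }
    return CDX + "?" + urlencode(params)

# e.g. cdx_query("runtimedna.com", "20150101", "20151231")
```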
Once the delivered .ZIP was extracted I then had dtSearch index all text in the files, thus providing desktop keyword-search across the archive. If you’re following in my footsteps, and need a free desktop search tool, then DocFetcher is a good freeware equivalent to the paid dtSearch.
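If you only need a quick-and-dirty keyword search over an extracted archive, and don’t want to install an indexer at all, a short Python script can walk the folder tree and grep every file. A rough sketch only; it builds no index, so on a big archive it will be far slower than dtSearch or DocFetcher.

```python
import os

def grep_tree(root, needle):
    """Case-insensitive keyword search across every file under `root`.

    Returns (path, line_number, line) tuples -- a crude stand-in for
    a proper desktop indexer.
    """
    needle = needle.lower()
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as fh:
                    for num, line in enumerate(fh, 1):
                        if needle in line.lower():
                            hits.append((path, num, line.strip()))
            except OSError:
                pass  # unreadable file: skip it
    return hits
```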
Here’s what I get on a test search across the archive. Definitely worth £6 to save this sort of knowledge, I’d say…
Possibly some of the Poser official forum was later ported to Smith Micro, but I also see Okham stuff back in 2005, on this search. It’s likely only a partial capture, but it’s pretty large.
Update: Community archive of the old Runtime DNA forums at the Internet Archive.
Heavy Vue users may also be interested in grabbing the old and vanished Cornucopia forums. Does anyone have the URL which those used to be located at? Update: found it at cornucopia3d .com / forum / index.php — but Archive.org does not appear, at any point in time, to have archived any of the actual posts. Just the names of the forum threads.
I found gnuThumbnailer 1.2. It’s a free thumbnail generator and batch image-processing tool from 2015, in English and German.
What can it automatically apply to a batch of thumbnail renders?
Images can be tinted (sepia effect, grayscale, user-defined color).
Images can be overlaid with a color.
Can make images lighter or darker.
You can add a colored or a transparent border to your images.
You can add a watermark to your pictures.
Very nice, especially since I can’t get the Netherworks Thumbnail Designer, which I bought a couple of years ago, to work. gnuThumbnailer appears to do more or less the same thing as Designer, albeit working outside of Poser.
gnuThumbnailer is free for non-commercial use, and is pay-what-you-want if you use it for commercial work. It’s for Windows and Linux and requires Java, but it has a portable version (which means that, if a future Java upgrade breaks the software, as can happen, the Portable version should still keep running because it bundles the required Java runtimes).
So, Poser content makers reading the above description will see the potential here for automating the making of standard 91 x 91 pixel Poser thumbnails in 16-bit .PNG, but… with your own border and brand overlay easily added via an automated batch process.
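As a sketch of what such a batch step involves, here’s a minimal bordered 91 x 91 thumbnail function using the Pillow imaging library (assumed installed; the function name, border colour and sizes are my own choices, not gnuThumbnailer’s). A watermark would be one further compositing call.

```python
from PIL import Image, ImageOps

THUMB = (91, 91)  # Poser's standard library thumbnail size

def make_thumb(src_path, dst_path, border_color="#404040", border=2):
    """Shrink a render to a bordered 91x91 Poser-style thumbnail.

    Resizes to fit inside the border, then pads the edge back out so
    the final image is exactly 91x91 pixels.
    """
    img = Image.open(src_path).convert("RGB")
    inner = (THUMB[0] - 2 * border, THUMB[1] - 2 * border)
    img = ImageOps.fit(img, inner)   # crop-and-resize to fill the inner area
    img = ImageOps.expand(img, border=border, fill=border_color)
    img.save(dst_path, "PNG")
```

Run in a loop over a folder of renders, this does the ‘uniform border’ part of the job; gnuThumbnailer adds the tinting, watermarking and a UI on top.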
Ah, but how to get a nice set of uniform thumbnails for gnuThumbnailer to work on? What you want there is D3D’s batch “Render Content” script that ships for free with Poser 11, as an ‘official partner’ script. This is found at… Top Menu | Script | Partners | Dimension3D.
This script appears to be an expansion of what used to be his “Batch render poses” script (which no longer loads for me in Poser 11, even with AVfix). The new “Render Content” can make batch renders from a set of Poser poses, and there’s also a slot to load a reset pose in…
There’s no Help or entry in the Poser PDF manual for this, but it’s fairly easy once you’ve spent 20 minutes figuring out what the buttons do and testing it. Your Poser pose sets should be found down on the folder path…
… though a few freebie makers may have used ..\Poses instead. But I guess if you’re a maker you’ll have saved the finished poses elsewhere, in a production folder.
The script does not move the camera to frame the applied pose, which means some poses may fling the figure out of the camera’s view. You might want to hack the script, so that the figure is placed back at the scene origin before rendering.
The batch “Render Content” script then uses your current render settings, so be sure you don’t set it running on 100 SuperFly renders at 4000px, or your PC will be running through that for days. The script has no QUIT! option, once the renders are underway.
Right, so after more research I find that there are in fact two ways to have Photoshop play “spot the differences” across an image stack, and to extract those differences. This solution relates to my previous post on Poser 11’s Comic Book Preview lineart, and the desire to fix small breaks and missing ‘chips’ of inked line.
I had discovered that a click of the current Display Mode’s ball-icon subtly randomises the coverage and break-patterns of the line-art available from Comic Book Preview in Poser 11. I was thus looking for a Photoshop plugin to combine multiple PREVIEW renders made in this way, but I now find… the feature is already native in Photoshop and can be done via two different methods!
First you’d assemble your slightly variant lineart renders, made as described here. 1.png through 6.png should do it, though you could go to twelve renders if you wanted to be really thorough.
Top Menu | File | Scripts | “Load files to Stack” (Stack is 64-bit only, do not tick “Automatically Align”). Shift-select all layers in the Layers Palette. Top menu: Edit | Auto-blend Layers | Stack Images. That’s it. Photoshop will then whizz through the stack as fast as a whizzly weasel, spot the differences, and produce a single unified image containing all those small differences.
The result is not as adjustable later, since Photoshop does it in a weird way and you don’t get discrete cutouts on new layers, each containing ‘just the bit that was different’.
I never knew this feature existed. Never heard it mentioned in umpteen years, or had cause to use it. It’s generally thought of as a focus-stacker for macro photographers.
1. Top Menu | File | Scripts | “Load files to Stack” (Stack is 64-bit only, do not tick “Automatically Align”).
2. Select top layer in the Layers Palette. Set its layer blending mode to Difference. Invert. ‘Select Colour Range’ – white. White highlights the bit(s) different from the layer beneath it.
3. ‘Copy merged’ and ‘paste in place’ your selection to a new layer at the top of the stack, invert it to return it to black. Ensure the new layer’s blending mode is set to Normal.
4. Delete the source layer and have the Action move down to the next layer in the initial stack. Repeat.
Eventually you have a base layer and all the differences are isolated, extracted and stacked on top. This stack of differences is more adjustable than the results of Method One, for instance allowing you to delete a layer with a manky eye if needed. However, it’s probably not as computationally precise as Method One, because the process can only ever compare a layer with the layer beneath it. Fragments may also have slight unwanted fringing around them.
In terms of speed, once you have Method Two as an Action they’re both about the same speed.
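If you’d rather do the merge outside Photoshop entirely, a per-pixel ‘darkest wins’ stack gives a similar result for black-on-white lineart: any line present in at least one render survives into the merged image. This Pillow sketch is a simple approximation, not the algorithm Photoshop itself uses.

```python
from PIL import Image, ImageChops

def merge_lineart(paths):
    """Merge near-identical black-on-white lineart renders.

    Takes the per-pixel darkest value across the whole stack, so a
    line break in one render is filled by any render where that line
    survived intact.
    """
    merged = Image.open(paths[0]).convert("L")
    for path in paths[1:]:
        merged = ImageChops.darker(merged, Image.open(path).convert("L"))
    return merged

# e.g. merge_lineart(["1.png", "2.png", "3.png"]).save("merged.png")
```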
Either method is still not going to automatically fix tiny hairlines running across eyeballs, or noses that have double lines or broken ridges when seen at certain angles.
A free Python script Copy Dynamic Hair Room Settings…
Prompts the user to select a dynamic strand-hair prop as a template, and copies its parameters to the currently selected dynamic strand-hair prop. If no HairProp is chosen from the list, but OK is clicked, the script defaults will be applied.
1. Install the fur items from Rosemary. I had decided to amalgamate the M4 props under the fur cape location. Thus they were all found at…
2. Load one of her furred items, and place it alongside a target prop. In this case my target was a mask.
3. The hair-to-hair settings-transfer script of course requires… hair. Any old hair, so to add it to the mask: Select prop | Hair Room | ‘New Growth Group’ | ‘Edit Growth Group’ | Select All | exit panel | click ‘Grow Guide Hairs’ with default settings. Your mask prop should instantly be super-sprouting with default guide hairs!
I don’t know of any PoserPython script to do all of the above basic hair setup automatically, though it should be possible. Thus a visit to the Hair Room is still needed with this script.
4. OK, now I selected the group of default guide hairs on the mask. The script was run. The other hair groups in the scene became available for selection for transfer, via a simple drop-down list.
5. In an instant, the script then copies over the selected hair’s settings to the mask. Here we see the rendered result of a transfer from one of the hair groups used to make Rosemary’s hats. Only one was copied but several were used to make the hats, meaning that the results were a little sparse. That was solved by simply cranking the Hair Density Setting to 300,000 and giving the hair a slight ‘Pull Down’ of 0.00008.
Hurrah, a hairy mask…
Now what’s needed is an auto-setup script to run step 3 on any selected prop, even if you’re not in the Hair Room. And a handy library of 50 hair setting presets, each growing on a simple ball prop, to transfer from.
This post is about how to partly fix line-breaks and missing ‘chips’ in Poser 11’s Comic Book Preview lineart renders. Solutions are not being sought here for ‘closing gaps’ to enable easy flood-fill colouring in paint software, since, if you’re doing it right, Poser already gives you a ‘colour flats’ render. I’m thus assuming that your basic output layers are colour flats, lineart, and shadows/highlights, each as a separate render from the same scene and ready for a Photoshop Action to composite and tweak. Thus your lineart is on its own layer, making it vastly easier to filter and correct later on.
Here the solutions are being sought simply to have nicer lineart with fewer or no breaks.
Update: the solution. It’s No.1 + a couple of little-known Photoshop features.
Here we see a deliberately extreme example showing the breaks and gaps that can happen in a raw lineart pass from Poser. We’re in Poser’s Smooth Shaded Display mode here, and on B&W in Comic Book Preview, and thus some of the Neal Adams-style inking is coming from shadows cast by one of the two lights. Poser is running in real-time with OpenGL, and it’s all WYSIWYG. Brom’s hair (‘Mature Mark’) is of course completely un-optimised for tooning as yet.
As you can see, his nose and eyes may still need some manual clean-up, even when using some of the tricks given below. Manual clean-up of such source lineart is not ideal. If cleanup takes just under an hour per page, and you have a 28 page comic, that’s perhaps four days of extra work. Per month, perhaps! Across a six-issue series intended to become a graphic novel that’s… way too much fiddly and non-creative work.
Here are a few of the options I explored…
1. PARTLY WORKS. In Poser, before you render to PREVIEW, you click the Document Display Style ‘ball’ again. Then do it again. You’ll notice that each time you click, Poser slightly randomises the line-joins on the inking, and may also make other lines look better. Nice.
Sadly, it’s not viable to then do six clicks, save a PREVIEW render from each, and then combine them all to ‘fill the gaps’ in Photoshop. I’ve tried it and the accumulated result is just dark and grungy blurgh. We would need a Photoshop plugin able to computationally ‘spot the differences’ between a near identical set of lineart. (Update: the solution is that Photoshop has this natively).
So, ‘click the ball again’ is a simple and neat trick, but there’s not much this trick can do about the nose when the face is seen at certain angles (see example above).
However, it does have a few advantages:
i) it quickly enables you to at least find a state that offers a balance between ‘good eyes’ and ‘reasonable lack of line-breaks’. Such a balance may be good enough, when mixed into a final complex blend of colour flats, lineart, and shadows. You have to remember that your ‘artist eyes’ see things in the art that the general reading public never even notices, and especially so if the intended audience is under age 12.
ii) it also offers the possibility of making just two near-identical lineart renders, one where the eyes are at their best, and another for the rest of the linework. Then you’d make a loose Lasso selection around the eyes and have a Photoshop action feather, switch layers, ‘paste in place’ and merge, and then drop back down the layer stack to delete the source layer.
iii) there are also more complex ways of using Photoshop in which you effectively just ‘paint in red’ on nice lines you want to keep. But here we’re trying to avoid the whole ‘spend an hour carefully fixing the lineart across six panels per page’ thing.
2. SOMETIMES SUCCESSFUL. Make a suitable fast custom-preset for Poser’s SKETCH renderer, one that gives you lineart with a fat charcoal line when run on b&w lineart. A preset that smurshes away most of the gaps and broken chips. Here’s my “2000AD” custom SKETCH preset at work…
We’d still need to dab some white on the nose and draw in some Dan Dare eyebrows, but it’s nearly there. Like I said above, the hair is completely un-optimised for tooning and we’ve smurshed it to black here. Although the black does then lend itself to raking with a wide white ‘rake’ brush.
Alternatively, idea 2 could be emulated by running the lineart through a Photoshop filter, a G’Mic filter etc, of a type that also smurshes out most of the breaks and gaps. The success of this will partly depend on your comic’s style.
A sub-option for idea 2 is to simply scale down the inking, via the dial on the Comic Book Preview control panel. With thinner lines you have more leeway when bloating them, either by SKETCH rendering into PREVIEW lineart or via a third-party filter.
3. FAILS. You might think there would be some computational solution by now. CorelDraw does have a tool called “Join Curves” for lineart. Apparently Illustrator can also do that. For those who need free software, Inkscape also has a free inkscape-chain-paths plugin which does much the same thing. However, judging by the Inkscape experience it doesn’t work as hoped, in terms of nicely closing small gaps in vectorised lineart.
4. FAILS. Clip Studio has a feature found at Correct line > Correct line width. This fattens all lines on lineart from which you’ve ‘knocked out’ the white. (That latter option has the deeply un-memorable name of Edit -> Convert brightness to opacity, when it should have a dinky little icon showing a white Mickey Mouse glove throwing a knockout punch). Sadly this correction just fattens everything. I’m very much a newbie at Clip Studio, but it appears to me that there’s no way to have Clip Studio do a more refined fattening of the lines. For those who need free software, line fattening is also easily done in Paint.NET via the free Overliner plugin.
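For the record, the blunt ‘fatten everything’ operation these tools perform is just a morphological dilation of the dark pixels, which can be sketched in a few lines with the Pillow library (assumed installed). As with the Clip Studio and Overliner options, it closes small gaps only at the cost of thickening every line.

```python
from PIL import Image, ImageFilter

def fatten_lines(img, passes=1):
    """Uniformly fatten dark lineart by morphological dilation.

    MinFilter(3) replaces each pixel with the darkest of its 3x3
    neighbourhood, so black lines grow outward by roughly one pixel
    per pass -- closing small breaks, but fattening fine detail too.
    """
    img = img.convert("L")
    for _ in range(passes):
        img = img.filter(ImageFilter.MinFilter(3))
    return img
```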
5. SUCCESS. I’ve yet to get to this point in my experiments, but from what Sixus1 has said I suspect that one takes a Texture Atlas from a figure + clothes, up-scales it to 8k and starts hand-inking ‘along the edges/seams’. The intention being to give the figure a total ‘hand drawn toon’ makeover, which you can then load back onto the figure at 4096. Possibly body and head are each done separately. That’s my guess, based on what I’ve picked up. You can see this partly happening in this Brian Haberlin screenshot from a while ago now. This is one of his hand-inked faceplates seen in a Smith Micro webinar, presumably partly designed to work with and mask Poser’s small lineart breaks and chips.
Like I said, I’ve yet to get to experimenting with this ‘intensive makeover’ approach, but from what I’ve heard on webinars and podcasts that seems to be the gist of the approach. And judging by Brian’s marvellous Poser-made comics, the approach can succeed very well. It looks quite labour intensive at the start, and before you start you’d have to be absolutely sure your ‘runtime bashing’ was finished and you’d devised exactly the character you want for your story. Because changing things later could be difficult.
One other trick learned from Brian Haberlin’s webinars is that it’s possible to emulate inking in Poser by zooming right in and selecting a line of polygons, and giving these a black colour. For those who find they have a persistent ‘break’ on a character’s lineart, even from different angles, this may be worth considering.
You have to do this ‘Haberlin approach’ by hand because there’s as yet no market in figures designed like this, or in off-the-shelf makeovers designed to do this for existing figures. Although if you go look at the Gage fan-art Poser figure on the Forender store, then you’ll get an idea of how attractive this approach might be made for buyers, and how it would play very nicely with Poser’s Comic Book Preview.
X. A future possibility. Software such as Krita, Clip Studio etc can already successfully flood-fill gapped lineart with colour. Presumably it does this by detecting and forming an invisible shape in order to “hold” the paint. This shape could theoretically be repurposed by future plugin makers to run a dark line along the ‘invisible shape’ outline, which would then be imposed to fill gaps in the original lineart. If the result would be pretty or not is anyone’s guess. I suspect it would be a blobby mess, which itself would then need manual cleanup.
Here’s my revival of a 2010 node-setup for Poser, originally by BagginsBill, and which has seemingly been lost in the 404 mists of the forums. However, it was rescued from oblivion by Infinity10. Infinity had first tried to build his own version in 2010 but, soon after seeing BagginsBill’s better version, built another variant and posted that as a screenshot. This screenshot has survived to 2020, lurking and little-visited in the Renderosity Galleries of all places. Sadly the screenshot lacked vital instructions about how and where to use it.
As you can see, I’ve rebuilt and tested it, and my 2020 screenshot adds guidance. It lets you drop in any square render, and with a few clicks you can mask out the background colour. Not quite as good as iClone, where you just drop a masked .PNG on the stage and it becomes a cutout billboard prop. But what follows is ‘as good as it gets’ in Poser.
1. You first load a “One Sided Square” to the Poser stage (Library | Poser 11 Content | Primitives | “One Sided Square”). Select it.
2. Now switch to the Material Room. Create the setup exactly as seen here. I have not re-named any nodes, except for the one under the loaded image. This is where you pick the background colour from, and as such it has been slightly re-named to serve as a user prompt.
3. The image to be loaded has to be square, whatever the shape of the item within that square. It appears from my tests that Firefly / Superfly do not play nicely with a re-sized “One Sided Square” prop, when it comes to rendering. Although a square image in the “One Sided Square” can be safely scaled up and down as a %, thus…
The two images used for this demo were 1200px and 1800px.
(Some may also want to slightly increase the Value_2 on the Math node connected to the main BUMP input, and also increase the strength of the BUMP, depending on your prop type).
4. The ideal background for a figure with eye whites is probably a green-screen colour, although bright-blue might also work as long as you have no blue on the figure. You can see on the screenshot where you pick the background colour to “knock out”. Click on the colour chip and you get a little eye-dropper tool and then you hover over the mini-image and pick its background colour.
For trees and plants it’s different. White may be best for green trees without white flowers, snow on branches etc. Or black, depending on how dark the tree-bark is. The process doesn’t work with .PNGs rendered by Poser with an alpha, since there’s no colour for the node setup to “grab” onto and knockout.
5. Ok, you got your node setup to work. Save the working setup as a prop to your Primitives folder, for future re-use and for making and saving variants. I suggest the name “Magic Billboard” which is easy to remember. Having such a magic prop means no need for an alpha channel mask. The chain of maths in the nodes is doing the required masking for you.
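I haven’t transcribed the node chain itself (see the screenshot for that), but the principle it implements can be sketched in plain Python: measure each pixel’s colour distance from the picked background colour, scale it, and clamp it to the 0..1 range to get transparency. The `gain` value here is my own stand-in for the Clamp/multiplier values in the nodes, not a name taken from the setup.

```python
def key_alpha(pixel, key, gain=3.0):
    """Alpha from colour distance to the picked background colour.

    `pixel` and `key` are (r, g, b) tuples in the 0..1 range. Pixels
    matching the key colour get alpha 0 (knocked out); anything
    sufficiently different gets alpha 1 (fully opaque).
    """
    dist = sum((p - k) ** 2 for p, k in zip(pixel, key)) ** 0.5
    return max(0.0, min(1.0, dist * gain))
```

Lowering the gain gives a softer cutout edge, at the cost of leaving semi-transparent fringe pixels around the subject.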
The path to the image is embedded in the saved prop, so make sure your called images are somewhere stable on your PC — like a dedicated folder down in your runtime. Move or delete the called image, and… the prop will break.
It’s not perfect in terms of rendering. There’s still a touch of green fringing. But it’s quick and such props are anyway likely to be in the far middle-distance or background.
Poser’s SuperFly and Preview render-engines each give a crisp render from this, and the real-time Preview even retains the mask on saving the render to .PNG. Sketch runs fine on such cutouts, interestingly.
The drawback here is that Firefly renders are noticeably fuzzy with this setup, and a green fringing is very noticeable. As yet I’ve been unable to find an additional node chain to add, which might better de-fringe. Possibly this fringing effect was why such a node setup did not catch on in a big way for Firefly users.
But… now we have Poser 11’s SuperFly, which does not have anywhere near the same fringing problem, though a green fringe is still present on some of the finer isolated hair-strands. I’d thus say this sort of prop is going to be most useful for the background of Superfly pictures where you want many people in your scene and they have eye-whites and white teeth etc. Which means you can’t have the node setup knock out white.
On a test of a tree on a white background, a Clamp (see middle node) setting of 1.0 rather than 3.0 removed most of the white fringing. The same trick did not work on bright green. However, there may be a better, fringe-free way for foliage/trees. Read on…
I also found another, somewhat later BagginsBill node setup from 2014. This allows some cool “Hue” tweaks that instantly make standard green trees into Autumn/Fall coloured, Winter coloured, or (by going to 2.25 or 3.00) into “alien planet” foliage. Here’s my 2020 rebuild of that…
We almost lose some definition on fine twigs with this, but also usefully lose the cutout fringing. The “Hue” tweak also opens up subtle effects that might make trees in a forest look a little different to each other, and also seem to fade out in colour as their ranks recede into the distance.
Regrettably this “no-fringe” node setup only works with white, meaning that it can’t be used with people due to their having eye-whites and teeth. Still, if you want to build a super-lightweight library for a forest, cliffscape, cityscape etc relatively quickly, here’s a way to do that. It could allow you to make a picture in one pass, rendering relatively fast.
Theoretically one could also use this “Magic Billboard” to build a re-usable set of comic billboards, done in the same style as the final comic, to provide quick scale-adjustable backgrounds. One might also “double-stack” for comics effects, for instance by precisely placing hand-drawn hatching line-art billboards (with knocked-out white) on top of Poser’s native real-time line-art. I think I’d rather do that by i) drawing new inked textures directly on character skins, or ii) using layers and brushes in graphics software. But you can see how the Magic Billboard could become a possible help for a comics maker — especially those who don’t want to fiddle around with getting and saving out an alpha-mask every time they need a quick billboard made. Because the node setup is giving you the alpha-mask automatically.
If you do want to take masked layers out to composite in Photoshop, there are various methods for getting a PNG render with a transparency mask, as discussed here. The gist of it is:
SuperFly -> Apply “Holdout” node to the main Surface node. GROUND should be visible.
Firefly -> Options -> Render over “Current BG Shader”. GROUND should be hidden.
Preview -> Mask is automatic. GROUND should be hidden.
Sketch -> Not possible, but you can turn off sketching into the background and only sketch into the figure/prop. To mask the sketch render later, also export a Preview render of the same character/prop in the same position, then use a selection of that as a mask in Photoshop etc. Exact registration of the two renders is usually not possible, due to the slight distorting of edges introduced by the Sketch effect.
Extra: A Renderosity forum answer, by hborre in August 2020, was about how to place renders onto 3D primitives, which seems to relate to putting images on planes. This might be useful to someone in the future…
Prop scale and image resolution: Find out your image’s x- and y- pixel resolution first (e.g. 2000 x 1000). In Poser, set the x- and y- scale for your primitive to the same dimensions, in this case 2000% on the X- scale and 1000% on the Y- scale. Import your image to the primitive. Then, on the overall scale for the primitive, dial in your desired size.
There’s a new 70-minute series on YouTube from Tony Vilters, Poser2Blender. It appears to be of use to figure-based content-makers, rather than those wanting to get a scene to Blender for NPR rendering. But there are also some interesting observations on how Poser exports .OBJ files for figures.
I was pleased to find a new YouTube video from the 3D Comic Creator, “DAZ Studio To Blender To Freestyle For Inking Your Comic”. He shares an in-depth 90-minute workflow for taking a DAZ Studio character into Blender and wrangling Freestyle lineart onto it. It’s clearly explained and he’s a good presenter.
I’m not sure I’d want to go through all that pain and intense fiddly-ness, though, just to get what Poser 11’s Comic Book mode / Sketch Designer can output ‘at the drop of a hat’ in real-time. But it’s interesting to see how the most advanced DAZ users are trying to make artwork for comics. And as Freestyle is currently something of a “moving target”, along with the rest of Blender, there’s always the possibility that Freestyle and other NPR aspects of Blender will start to become easier and more automatic to use in future. It’s very early days, but a project called BEER intends to try making NPR easier to get from Blender. Another hopes to add an easy SketchUp-like ‘sketched outlines’ capability to the default install of Blender (something you can only do with a paid plugin at present). Although both have very primitive demos at present.
You remember those “whiteboard animation” videos, in which a hand super-quickly draws a sketch and lays down words, while a voiceover plays? They were the ‘hot new thing’ circa 2014, and generally now go by the name of “explainer videos”.
Production of them is now a Cloud or tightly Cloud-locked subscription thing, and there appears to be no desktop-only software worth having for making them quickly and easily. The leading $40+ a month names are Sparkol VideoScribe Pro and Easy Sketch Pro, among others. There’s obviously a lot of money in such services, and the Web is very intensively astro-turfed with page after page of spam and misleading marketing on such things. It’s almost impossible to find reliable information. Anyway, they exist, and the market leader VideoScribe has impressive capabilities, yet is fairly simple to use.
Their lustre has faded, as a media form. ‘Explainer videos’ were very hot in 2015-16 as we came out of the Great Recession, but they became over-used for mundane purposes — often purchased off the shelf for $20 from quickie providers in the back-streets of India, via Fiverr. Such indifferent use has turned them into the humdrum PowerPoint slides of 2020. Meaning a superficially fancy presentation of fuzzy or half-baked ideas, done in a manner that’s then difficult to question or challenge. Thus making the format one that people now wince at, when they see it heave into view in a business meeting, teachers’ meeting or in a marketing context.
But that doesn’t have to be the case, and with a good story to tell and some creative flair they still have a place in education, especially for children and in lower-level work training. That set me wondering about how one would get a Poser line-art render to animate as if it was being drawn line-by-line by a human hand. Being able to output such a thing might be an attractive feature for Poser 12.
At first I wondered if Smith Micro’s MotionArtist could do this sort of “reveal a drawing” effect. Nope, seems not. Reallusion’s Cartoon Animator? Nope. You’d think that such a sales-worthy feature would be a natural one to add, but it’s not.
What about the vector tools? Surely there’s an Inkscape plugin? Nope. Clip Studio? Nope, seems not. Paint.NET? Nope.
Then I thought about Poser’s ability to output a Corel Painter script, for playback in Painter. But my in-depth look at that nearly forgotten Poser feature shows… that it does not actually lay down “strokes and lines”, whatever the end result may appear to be. Like I said above, a lot of this stuff is in the realm of “smoke and mirrors”, in terms of how it’s actually done vs. what it actually looks like.
But could a Python script in Poser go through a figure and selectively turn on the Geometric Edge lines, stepping through the body parts according to a set “feet to head” list, and saving each step to a movie-frame as it went along? That would give a certain effect, but it might look a little weird in terms of not looking “hand drawn” when played back. For instance, the long leg lines would be drawn in “all at once”. One would have to also have the script generate shaped white geometry at certain co-ordinates (placed in front of the leg) to prevent the complete line from being seen all at once. It’s a very complicated possibility, for someone with a few weeks to spare and ninja Python coding skillz, but it doesn’t seem likely to happen.
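For what it’s worth, the skeleton of such a script is easy to sketch. This is a minimal sketch of the idea, not working Poser code: the actor names are illustrative, and the actual Poser-side calls (toggling an actor’s edge lines, saving a Preview frame) are left as callbacks, since I haven’t tested those API calls:

```python
# Illustrative "feet to head" draw order; actor names are assumptions.
DRAW_ORDER = ["lFoot", "rFoot", "lShin", "rShin", "lThigh", "rThigh",
              "hip", "abdomen", "chest", "lHand", "rHand", "head"]

def reveal_frames(draw_order):
    """Yield, for each movie frame, the list of body parts whose
    edge lines should be switched ON so far."""
    shown = []
    for part in draw_order:
        shown.append(part)
        yield list(shown)

def run(draw_order, show_edges, render_frame):
    # show_edges(parts): hypothetical callback that turns edge lines
    #   on for just these actors, via the Poser Python API.
    # render_frame(i): hypothetical callback that saves the current
    #   Preview render as movie frame i.
    for i, visible in enumerate(reveal_frames(draw_order)):
        show_edges(visible)
        render_frame(i)
```

The hard part, as noted above, is everything this sketch leaves out: the masking geometry needed to stop long lines appearing all at once.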
Perhaps then what’s needed is for Poser 12 to whip up some maths that saves Comic Book Preview edge-inking lineart to SVG, and have that SVG embed pseudo “pen stroke” information based on mesh names. For instance, tell the SVG that: this set of ink lines comes from the head mesh | therefore when drawing = reveal main head outline first | then eyes, nose, mouth | then neck | then reveal hair and hat lines. Or: this leg line is a long line from the leg mesh | therefore draw it bit by bit | first on one side of the leg and then the other. It would probably still not look convincing.
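Embedding that kind of draw-order hint in an SVG is at least technically straightforward. Here’s a minimal sketch using custom data- attributes keyed to the originating mesh name. The attribute names and mesh names are my own illustrative assumptions, not any real Poser 12 format:

```python
import xml.etree.ElementTree as ET

def lineart_svg(paths):
    """Build an SVG whose <path> elements carry a pseudo pen-stroke
    order. `paths` is a list of (mesh_name, draw_order, path_data)."""
    svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg")
    # Emit paths in draw order, so a player can simply walk the file.
    for mesh, order, d in sorted(paths, key=lambda p: p[1]):
        ET.SubElement(svg, "path", d=d,
                      attrib={"data-mesh": mesh,
                              "data-draw-order": str(order)})
    return ET.tostring(svg, encoding="unicode")

# e.g. head outline drawn before the leg line
doc = lineart_svg([("leg", 2, "M0 0 L1 1"), ("head", 1, "M2 2 L3 3")])
```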
The other way might be an AI that “knows” about the order of head, hands, feet, eyes, etc. It would look at any vectorised Poser lineart, identify the body parts, and then “know” the order in which a human would draw them, and how. It then saves a new SVG with that drawing information embedded in it. Clip Studio already has AI pose recognition, which transfers the pose from a photo to a 3D figure, so it’s not going to be impossible in future. It’s probably the future of this sort of thing, but it’s still some way off.
Alternatively, if you like the Poser Comic Book inking/rendering style and want to keep it, the best option may simply be to import your lineart to VideoScribe. It will automatically vectorise the image, and you then choose the reveal style…
The bottom-left one is the best. It might appear that VideoScribe is preparing to draw ugly blodgy lines over your lovely lineart, but that is not the case. What it’s showing is where your lines will be revealed, not drawn.
What appeared to work best for this import was a 600px PNG with a plain background. On playback the hand and pen darts about all over, since there is no “order of laid-down lines” to follow, but if you set the draw time to under 5 seconds then the flickeriness is not going to be too wearing on the audience (though a few may be on the floor having a flicker-induced epileptic fit). It’s also possible to remove the ‘drawing hand’ or just use a pen-nib instead. For reveal times, the best VideoScribe scene/video settings are said to be…
Another possibility is that you vectorise in Inkscape, set a new top layer, then quickly paint over it with a brush in an approximation of hand sketching. You then make these drawn lines fat enough to cover the lower drawing, and set the top layer to have an opacity of zero. When brought into VideoScribe, you can apparently tell the reveal “hand” to follow only the lines on the top layer, thus cunningly revealing the already-done layer beneath. That would give you a more realistic “drawing by hand” effect, on playback. Like I said, it’s all “smoke and mirrors” in this corner of graphics-world.
Here’s a demo of Poser 11’s Sketch in ten examples. We start with the new free Aiko 3 character Dody under a relatively flat IBL light, with some real-time Comic Book inking applied by Poser 11. Background and Ground are not visible, as they would be rendered separately. There are no shadows here, though adding them would not slow the renders down much. It would be easy enough to also render a shadows pass for blending, but the intention here is to show what one can do without multipass renders and without any compositing of render layers. Also without any gamma-lift of grungy textures, such as are seen on the hat. Renders are to 1800px, on a normal domestic PC and not on my workstation.
We start with a straightforward portrait-style picture: a real-time Preview render to 1800px, taking a fraction of a second. Here reduced to 1200px.
The same real-time Preview render to 1800px, at a fraction of a second. The only differences are that the Preview textures are now at 1024px, and there’s more texture detail on the hat. Here reduced to 1200px.
Here my custom Sketch preset “Charcoal Happens” is run on the top scene. 2 seconds. As you can see, it’s something like ink lineart, but in charcoal, and with fading on the hair. Perhaps a little too close to what one might get from a G’MIC filter, but useful for blending.
My “Mangalish” Sketch preset runs on the top scene. 3 seconds. The hat is not exactly zip-toned, but the hat material has not been adjusted/replaced, nor the gamma changed on the starting scene.
My “Pretty Skinny” Sketch preset. 5 seconds. I got distracted and attempted to make a ‘skin extractor’ using Poser’s Sketch engine. Could be useful, but it requires the hair and hat to be dark. Which is not impossible, given my new Python script to do just that.
My “Graphic Painting” preset Sketch render. 20 seconds. Reliably gives a somewhat more hand-painted look to the straightforward Comic Book Preview.
My “Arthur Rackham” Sketch preset, aiming for the feel of a mass-printed Edwardian book illustration. 10 seconds. The hat gets ugly whorling that will not take Photoshop filters well; it will only get worse and scream “naff Photoshop filter!”. That would need to be fixed, either by a gamma-lift or a new texture.
My “Painterlish” Sketch preset. 4 seconds. The sort of render you might want if you then wanted to start overpainting manually.
My “Graphic Painting” Sketch render here goes through a 30-second Photoshop filter. Not bad, but not a patch on the sort of convincing hand-painted watercolour look that can be done with multipass renders and layer blending and per-layer filtering.
Now colour is turned off and the scene has gone to pure Comic Book Preview inking of the lines. No Sketch rendering is happening here, it’s pure Comic Book Preview. A microsecond, when rendered. Such real-time lineart renders can be blended in many ways, to add to a hand-made look.
As you can see, all of these have speed. The myth is that Poser’s Sketch renderer is very slow, and that doing NPR with Poser “takes hours” only to produce a blodgy mess. As you can see above, that doesn’t have to be the case. Poser’s Sketch Designer can run presets at nearly the same real-time speeds as Comic Book Preview rendering, even at 1800px or 3600px. The first step is to turn off the Background in the Sketch Designer, so you’re only Sketching ‘into’ the character / clothes / props. The Opacity slider is then the other main speed killer: the smaller the value, the slower the render. The other things to watch for are that i) presets are render-size specific and ii) they run differently depending on whether the Comic Book mode colour switch is “on” or “off”.
Poser 11 saves custom Sketch presets to one of its obscure hidey-holes at C:\Users\YOUR_USER_NAME_HERE\AppData\Roaming\Poser Pro\11\SketchPresets. Be sure to back up this folder and its parent folders before moving to a new version of Poser, if you’ve been making custom Sketch presets.
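A quick way to do that backup is a few lines of Python, run from any Python install. This is a hedged sketch: the default path mirrors the one above via the APPDATA environment variable, but check it matches your own setup before relying on it:

```python
import os
import shutil

def backup_presets(dst, src=None):
    """Copy the Poser 11 SketchPresets folder to `dst`.
    If `src` is not given, build it from %APPDATA% per the
    path mentioned above (an assumption about your setup)."""
    if src is None:
        src = os.path.join(os.environ["APPDATA"],
                           "Poser Pro", "11", "SketchPresets")
    # copytree creates dst and fails if it already exists,
    # which conveniently stops you clobbering an older backup.
    shutil.copytree(src, dst)
    return dst
```

Pointing `dst` at a cloud-synced or external-drive folder covers the “before moving to a new version of Poser” case.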