Speaking of interfaces, when will we have one that works just by thinking—something less intrusive than Neuralink—that lets us control not just Blender, but the entire computer? I think my productivity would increase a lot...
I worked in non-invasive BCIs for a couple of years (this was about 7 years ago). My current horizon estimate for “put on a helmet and get a usable brain-computer interface” is never.
With implants, we are probably decades away.
What currently works best is monitoring the motor cortex with implants, as those signals are relatively simple to decode (and from what I recall we're starting to be able to get pretty fine control). Anything tied to higher-level thought is far away.
As for thought itself, I wonder how we would go about it (assuming we manage to decode it). It’s akin to making a voice-controlled interface, except you have to say aloud everything you are thinking.
Have you kept up with recent ML papers like MindEye, which have managed to reconstruct seen images using image generator models conditioned on fMRI signals?
Ever since that paper came out, I (someone who works in ML but has no neuroimaging expertise) have been really excited for the future of noninvasive BCI.
Would also be curious to know if you have any thoughts on the several start-ups working in parallel on optically pumped magnetometers for portable MEG helmets.
> Have you kept up with recent ML papers like MindEye, which have managed to reconstruct seen images using image generator models conditioned on fMRI signals?
Not really. I left the field mostly because I felt bitter. I find that most papers in the field are more engineering than research. I skimmed through the MindEye paper and don’t find it very interesting. It’s more of a mapping from “people looking at images in an fMRI” to identifying the shown image. They make the leap of saying that this is usable to detect the actual mind’s eye (they cite a paper that requires 40 hours of per-subject training on the specific dataset), which I quite doubt. Also, we’re nowhere near having a portable fMRI.
As for portable MEG, assuming they can do it: it would indeed be interesting. Since it still relies on synchronized regions, I don’t think high-level thought detection is possible, but it could be better for detecting motor activity and some mental states.
I know this is the classic eye-roll question, but is support planned for Linux/desktop devices? I imagine the future Android app could be used via Waydroid, but perhaps, seeing how VLC bridged that gap, a native version could too?
I have a small touchscreen Linux device I use to view HN via 4G; it is a UMPC laptop from Donki called the Nanote Next. Using the giant Blender interface on that tiny device would be greatly improved if I could use the Android experience.
The Wacom kernel drivers are so nice, especially with the neat little interface GNOME has in the settings. I got a secondhand Wacom tablet from 2002 at a garage sale that serves its duty signing PDFs and sculpting in Blender on those rare occasions where it's needed.
Makes me wonder if anyone's playing osu! on their Steam Decks...
This is a bit of folly IMO. A mouse and keyboard provide a less fatiguing and more precise mechanism for the type of 3D work that Blender can do. What I imagine they will find is that the act of "tabletizing" Blender will force it to simplify its workflows. You can already see it in the screenshots. This will downgrade the power that Blender has, confuse the existing users, and generally serve nobody well.
If they insist on doing it, it would be a good idea to rebrand it so expectations are in line with what a tablet interface enables/prevents.
It's like saying "People with their mouths full of food couldn't perform Shakespeare so we translated Hamlet to a maximum of 2 syllable words. Now everyone can perform Shakespeare."
> To support Blender’s mission of making 3D technology accessible to everyone
If accessibility is a priority for Blender, then they should absolutely be trying this. This isn’t going to take away the keyboard/mouse control that currently exists. The point is to give people who (for whatever reason) can’t use a keyboard and mouse a tool that they don’t currently have access to. There is also a large segment of the younger user base whose primary interface to computing is a tablet. This has the potential to open a whole new market of users for Blender.
Give them a little credit… I don’t think Blender is going to “downgrade” their existing workflows. For this tablet/pen project, who knows what kind of UI/UX they will have - it could be great. Plus, it is important for a project like Blender to have the freedom to experiment, otherwise you end up with a static ecosystem.
But, honestly, why wouldn’t you want Blender to make 3D work available to others who prefer to work with a different set of input devices? If that tool ends up as a “Blender Lite”, who cares? It may not be useful to you, but it will be useful to someone else. And maybe they find a new feature that will be useful to you in the process.
My take has some nuance. Nothing against making an accessible 3D app. If they call it Blender Lite or Tablet Version, great.
But they gave the impression it would be Blender.
The blog post says they'll be using Application Templates to implement this:
https://docs.blender.org/manual/en/latest/advanced/app_templ...
So touchscreen support will be like a mode you can switch on. I don't think desktop users will be affected.
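For the curious, an Application Template is essentially a folder containing an optional startup .blend plus a registration script. A rough sketch of the shape, going by that docs page (the folder name here is invented):

    # Lives under scripts/startup/bl_app_templates_user/:
    #
    #   my_touch_template/
    #       __init__.py      <- this file
    #       startup.blend    <- optional replacement startup scene

    def register():
        # Template-specific setup (keymaps, UI tweaks) would go here.
        print("touch template registered")

    def unregister():
        print("touch template unregistered")

So the touch UI can live alongside the normal one rather than replacing it.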
Until they find it too tiresome to support two different interfaces and deprecate one or the other.
Removing/degrading the UX for keyboard and mouse users would be (professional) suicide for Blender, I'm 99% confident Ton would never do something like that, unless most of the user base suddenly find themselves using tablets exclusively. And even then I'd doubt he'd leave us desktop users behind.
Off topic, but linguist John McWhorter has an interesting idea about actually translating Shakespeare into modern English: https://www.americantheatre.org/2010/01/01/its-time-to-trans...
I don't know about this. Compare Blender 2.8 to 2.7. A big part of the UI revamp was exposing formerly menu/hotkey-only stuff in the viewport UI. Selecting an orthographic view angle, for example, used to need the numpad; you can still do that, but they added a nice minimal GUI for it too. Things in general got easier with the mouse, but it didn't take away from the powerful vim-like keyboard controls.
Maxon ported ZBrush to iPad (released last year) https://www.maxon.net/en/zbrush-for-ipad
I'm not sure exactly what that means; intuitively I agree with your comment, but I guess the ZBrush port acts as a data point that's at least a partial counterargument (only partial because, e.g., Maxon hasn't ported Cinema 4D, which is more directly analogous to Blender).
I'm really curious about the ZBrush port, e.g., is that really because customers were asking for it (or usage data from Maxon's own iPad-only [acquired] app Forger indicated there was interest)? Or is it a defensive move trying to prevent an Innovator's Dilemma-style disruption from the bottom by an app like Nomad Sculpt https://nomadsculpt.com?
I see what you are saying but:
I think it’s really cool for Blender to be experimenting with UX given its incredible stature as an OSS project.
If Blender does this well it could change the landscape and culture of OSS.
Of course there are risks but fortune favors the bold.
Making a 3d app on tablet with touch is fine. Find the sweet spot and make the best thing for the interaction and the users.
As long as they are honest with themselves and the users about what it is capable of, great.
But it feels like some UX people got the steering wheel and are taking everyone on a joyride.
I'm building my own polygon modeling app for iOS as a side-project [0], so I feel a bit conflicted.
Getting fully featured Blender on the iPad will be amazing for things like Grease Pencil (drawing in 3D) or texture painting, but on the other hand my side-project just became a little bit less relevant.
I'll have to take a look at whether I can make some contributions to Blender.
[0] https://apps.apple.com/nl/app/shapereality-3d-modeling/id674...
There’s also the excellent Nomad Sculpt, which, while not a mesh editor, is an incredibly performant digital sculpting app. Compared to Blender’s sculpt workflow it maintains a higher frame rate and a smaller memory footprint at a higher vertex count. Of course it’s much more limited than Blender, but its sculpting workflow is much better, and then you can export to Blender.
There is room for more than one modeling app on iOS as long as you can offer something that Blender doesn’t, even if it’s just better performance.
How does it compare to Blender 4.5 with Vulkan enabled?
Just for reference, I’m running 4.5 with the Vulkan backend and sculpting a 6.3 million vertex object completely smoothly, with Blender using 4 GB of RAM and 2.5 GB of video RAM. Granted, my system has a 9800X3D and a 5070 Ti.
One thing Blender lacks is easy 3D texture painting. As far as I know, neither is there a decent 3D texture painting iPad app. Definitely a gap in the market.
This might scratch that itch? (not really my domain, but I happened to see this recently, so my apologies to you if it's not what you mean)
"UberPaint: Layer-based Material Painting for Blender (PUBLIC BETA)" https://theworkshopwarrior.gumroad.com/l/uberpaint
https://www.youtube.com/watch?v=meX3cbtdVbI
Yes… I have seen this before and played with it. This was a good attempt at emulating the behavior of an app like Substance Painter. However… the core problem is that in order to paint textures you need very complex and deep functionality. When using Substance, I have to variously consider: the texture channels (e.g. color, roughness etc), the many layers that may serve these channels, the baked texture channels (e.g. ambient occlusion, normals etc), the many blend modes, masks and adjustments that serve and interconnect all these.
I doubt that anything other than native Blender functionality could serve all this with any elegance.
I teach Substance Painter and it does a good job of hiding all this complexity until you need it. It is very common for students to underestimate it as an app… to view it as just Photoshop for 3D.
Procreate lets you load and paint 3D models. It is nothing like Substance Painter, but it might work for some use cases.
Not even a close comparison. There’s painting colors and then there’s texture painting (masks, alphas, normals, specular).
Yes. And you can paint textures too with Procreate. But like I said, not as advanced as Substance Painter and others. It might still be enough for some use cases.
Source: https://procreate.com/procreate/3d
To cheer you up: in my experience over the existence of the App Store, anytime something like this comes to the Store, it's a big win for independent side projects. Your project might well be cheaper and solve a specific problem, so it would benefit from the awareness that Blender's large marketing footprint would inevitably leave behind ;) Keep building!
I'm currently in contact with the Blender team to see where I could contribute, but you're right that there is space for multiple projects.
I think I'm going to focus more on CAD / architectural use cases, instead of attempting feature parity with Blender's main selling points (rendering, hard-surface modeling, sculpting).
> Your project might possibly be way cheaper
Eh ... blender is open source.
>> my side-project just became a little bit less relevant.
Blender has a pretty big learning curve. Since your app has a much narrower focus, you can still make something a lot of people will use.
Having been in the app development game for a long time, I know the feeling but have also learned to realize that this is actually not a negative; it means there's a strong signal that there's a desire for 3D apps on these touch devices. Competition can be really good. And your app has the ability to be more focused vs a legacy app that has to please a very large user-base who've come to expect it to behave a certain way.
That's a good way of looking at it!
I'm definitely aiming to build a more focused app compared to Blender, as I want to focus explicitly on modeling, e.g. B-rep or NURBS.
What kind of apps have you worked on?
Well, funny enough I'm actually working on a visionOS sculpting app at the moment! It's metaballs based, and kind of a different vibe / niche than what you're going for.
E.g., more like a sketch than asset production, and metaballs-focused, which obviously is going to create a very different topology than NURBS, etc.
My first app was Polychord for iPad which is a music making app that came out in 2010.
These days Vibescape for visionOS is a big one for me, and there are others. I also worked in the corporate world for about a decade working on apps at Google, etc.
On the contrary, your project just became even more relevant. Blender badly needs an alternative/competitor. Everybody loses if a single project dominates.
Maya, 3D Studio Max, Cinema 4D...
Blender has a ton of competitors. They're all commercial and have corporate backing. If anything, blender is the "little guy". It is utterly amazing what Ton has managed to do with Blender.
Calling Blender an underdog isn't accurate at all. It has easily the most reach and the biggest user base of all of them.
I would think Maya is the most influential of all of them. Blender is popular among hobbyists and people who aren't able to shell out a few thousand every year, but Maya dominates in the commercial world. Plus many animators are using Unreal Engine just for traditional animation now
Absolutely correct.
Blender is absolutely an underdog in commercial studios. It is used, but it’s the minority tool in professional settings. There are still several areas where Blender is lacking compared to Maya or 3ds Max.
blender is fortunately open-source...
So is Chromium. But, like with web browsers, competition is always good.
Blender is nothing like Chromium. It's not made by a big company, and it sprang up in an extremely for-profit niche (and it has like 4 serious competitors that are all actively in use).
And Chromium is (arguably) great. Lots of great browsers are based on it, at least.
Have you seen picoCAD? It is hands down one of my favorite pieces of software.
I haven't used picoCAD, but I came across it once before. It looks adorable! I'll definitely check out its UX.
Have you seen the guy doing Feather 3D for iPad? There's a lot of demand for 3D on touch screens, but it's hard to find the how.
I didn't know about Feather 3D, but it looks super aesthetically pleasing. I'll have to try it out.
I tried uMake a while back, but found the 3D viewport navigation a bit hard to use, and would often find out I had been drawing on the wrong plane after orbiting the camera.
After using something like Tilt Brush in VR, it's hard to go back to a 2D screen that doesn't instantly communicate the 3D location of the brush strokes you're placing.
The concept of a pen tablet friendly 3d modelling app was pioneered by an indie dev many years ago: https://moi3d.com/
I haven't done much designing. Starting with a cube and sculpting/transforming seems to be Blender's approach. Are there any other approaches for designing 3D shapes and assembling them?
Basically, Blender says "start with a cube". I want to ask why and what are the other options.
Blender doesn't say "start with a cube". Its default "General" scene has a cube. You can add whatever mesh, text, curves, metaballs, grease pencil, volumes, etc you want.
As far as workflows go, there are far too many to list, and most artists use multiple, but common ones are:
* Sculpt from a base mesh, increasing resolution as you go with remeshing, subdivision, dyntopo, etc.
* Constructively model with primitives and boolean modifiers.
* Constructively model with metaballs.
* Do everything via extrusion and joining, as well as other basic face and edge-oriented operators.
* Use geometry nodes and/or shaders to programmatically end up with the result you want.
Finally, since we're a lot of programmers here: all of Blender's functionality is exposed as a Python API that can be run directly from within Blender or as an addon/extension. Handy when you're doing a bunch of repetitive stuff. https://docs.blender.org/api/current/index.html
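To give a flavor, here's a minimal sketch (the grid size and names are arbitrary) you could paste into Blender's Script editor:

    import bpy

    # Tedious by hand, trivial in a loop: a 5x5 grid of small cubes.
    for x in range(5):
        for y in range(5):
            bpy.ops.mesh.primitive_cube_add(size=0.8, location=(x, y, 0))
            bpy.context.active_object.name = f"cube_{x}_{y}"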
I think "hard surface modeling" is a keyword you might want to look up. Also look into traditional CAD and "parametric design". All of these have considerable overlap and are focussed on mostly "boxy shapes" with some "free form surfaces" added.
On the other end of the spectrum you have "sculping" for organic shapes and you might want dig into the community around ZBrush, if you want to fill the gap from "start with a cube" to real sculpting.
More niche, but where I feel at home, is the model-with-coordinates-and-text-input approach. I don't know if it has a real name, but here is where it pops up and where I worked with it:
- In original AutoCAD (think 90s) a lot of the heavy lifting was done on an integrated command line, hacking in many coordinates.
- POV-Ray (and its predecessors and relatives) has a really nice DSL to do CSG with algebraic hypersurfaces. Sounds way more scary than it is. Mark Shuttleworth used it on his space trip to make a render, for example.
- Professionally I worked for a couple of years with a tool called FEMAP. I used a lot of coordinate entry and formulas to define stuff, and FEMAP is well suited for that.
- Finally, OpenSCAD is a contemporary example of the approach (a related sketch follows below).
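Blender itself can be driven in this style too, via its Python API. A minimal sketch (the names are invented) that builds a mesh from nothing but typed-in coordinates:

    import bpy

    verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]  # explicit coordinates
    faces = [(0, 1, 2, 3)]                                 # one quad over them

    mesh = bpy.data.meshes.new("typed_quad")
    mesh.from_pydata(verts, [], faces)  # (vertices, edges, faces)
    mesh.update()

    obj = bpy.data.objects.new("typed_quad", mesh)
    bpy.context.collection.objects.link(obj)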
There are lots of ways to construct geometry.
- Sculpting: you start with a high-density mesh and drag the points like clay.
- Poly modeling: you start with a plane and extend the edges until you make the topology you want.
- Box modeling: you start with a cube or other primitive shape and extend the faces until you make the shape you want.
- NURBS / patches: you create parts of a model by moving Bézier curves around and creating a surface between them.
- B-reps / parametric / CSG / CAD / SDF / BSP: these are all similar in that you start with mathematically defined shapes and then combine or subtract them from each other to get what you want. They’re all different in implementation though.
- Photogrammetry / scans: you take imagery from multiple angles and correlate points between them to create a mesh.
Every Blender tutorial starts with "hit a, hit x to delete the default objects." I don't think this is a valid point.
There are many approaches. For example, constructive solid geometry, where you use boolean operations on simple 3D shapes to create a more complex object. Another one is the extrusion of 2D shapes, which adds volume to 2D shapes.
My main tool is OpenSCAD, as I am a programmer and I model for 3D printing; it supports both CSG and extrusion, but not mesh manipulation like Blender.
One paradigm is not better than the other. Blender is nice for modelling, for example, characters, but it's really painful for doing precise shapes. For modelling multi-part action figures for 3D printing you'll need both things, sometimes modelling something in one tool and finishing it in another.
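For what it's worth, the same CSG idea can be sketched through Blender's Python API with a boolean modifier; a rough illustration, with object handling simplified:

    import bpy

    bpy.ops.mesh.primitive_cube_add(size=2)
    cube = bpy.context.active_object
    bpy.ops.mesh.primitive_uv_sphere_add(radius=1.3, location=(1, 1, 1))
    sphere = bpy.context.active_object

    # Subtract the sphere from the cube, OpenSCAD difference() style.
    mod = cube.modifiers.new(name="cut", type='BOOLEAN')
    mod.operation = 'DIFFERENCE'
    mod.object = sphere
    bpy.context.view_layer.objects.active = cube
    bpy.ops.object.modifier_apply(modifier=mod.name)
    sphere.hide_set(True)  # hide the cutter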
For sculpted works (best for organic and free-form style character and object designs) starting with a "clay sphere" is also common. In terms of apps for the iPad: Nomad Sculpt is well favoured there.
Other popular approaches to designing shapes use live addition/subtraction; Nomad supports that as well, but it's not as intentional as, say, 3D Studio Max (for Windows), where animating those interactions is purposely included as their point of difference.
There's also solid geometry modelling, intended for items that would be manufactured; SolidWorks is common for that.
Then finally you have software like Maya, which lets you take all manner of approaches, including a few I haven't listed here (such as scripted or procedural modelling). The disadvantage here is that the learning curve is a mountain.
And if you like a bit of history, POV-Ray is still around, where you feed it text markup to produce a render.
I'm curious about this too. On the engineering side, tools like SolidWorks and Fusion start from extruding a planar sketch, which is a model that maps well to conventional manufacturing techniques but isn't very artistic.
Of note, there is an interesting parallel between the sketch/extrude/cut approach that SolidWorks uses and the brush approach you used to use to make Half-Life and Source engine maps. (I don't know what the current tools, e.g. for Unreal, use.)
You want Rhino3d for creative + manufacture
Open a new Blender file, select Sculpting, and you start with a high-poly sphere. The default cube is not suitable for sculpting unless you subdivide it several times.
You can create whatever start up file you want.
Other approaches include subdivision surface (subsurf) workflows, metaballs, signed distance functions, procedural modeling (geometry nodes) and grease pencil.
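Signed distance functions in particular fit in a few lines. A toy sketch in plain Python/NumPy (nothing Blender-specific, shapes and sizes made up): a model is just a function from points to signed distance, and booleans become min/max:

    import numpy as np

    def sphere(p, r=1.0):
        return np.linalg.norm(p) - r

    def box(p, half=0.8):
        q = np.abs(p) - half
        return np.linalg.norm(np.maximum(q, 0.0)) + min(q.max(), 0.0)

    def union(d1, d2):  # CSG on distances
        return min(d1, d2)

    p = np.array([1.2, 0.0, 0.0])
    print(union(sphere(p), box(p)))  # negative = inside, positive = outside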
This is excellent news. So many artists are now using Procreate on iPad Pros as their primary platform. I do not miss the days of using Puppet to juggle the configs of various overly expensive and user-hostile DCC software. The barrier to entry used to be so high for designers.
I teach digital painting, and Procreate is slowly becoming my enemy. I fully appreciate its ease of use, its fantastic union with the Apple Pencil, and certainly my students love it. But doing design/creative work on a small screen is not healthy, especially for complex images. Neither is it easy to maintain a complex workflow, such as that required by matte painting and multi-layer compositing. Also, any presence of tablets in a design teaching lab is never pretty... I can't easily review their files or integrate their output into a pro desktop app.
The other advantage to procreate+ipad is that it’s easy to take with you. It’s effectively the new sketchbook.
It’s just nice to be able to draw on the go or not be tied to a Cintiq.
You can connect the iPad to a larger screen and use it like a Cintiq though.
It’s definitely not fully featured enough for more complex art but the appeal is more than just the accessibility.
Almost all my professional concept artist friends have switched over, but I agree it’s not a great fit for matte paintings.
Cost is likely another big reason it’s popular with students. A $13 one time purchase is hard to beat… even with edu pricing Adobe CC quickly gets more expensive. Clip Studio Paint falls somewhere in the middle.
Not only apps like Blender.
I really want to be able to pencil-write in coding apps on the go, now that handwriting recognition has gotten good enough, but so far most of them provide a very lame experience.
It is like people don't really think out of the box on how to take advantage of new mediums.
Same applies to all those AI chat boxes. I don't want to type even more; I want to plainly talk with my computer, or have AI-based workflows integrated into tooling that feel like natural interactions.
I think it just depends on preferences. For me typing is significantly more "natural" than writing with a pen/pencil, which I've probably resorted to less than once a month over the last few years.
Regardless of the preference, the tools should support both workflows well. Unfortunately, this is yet another example where the iOS/iPadOS world is leading; on Android I can hardly recommend anything really usable.
For anyone who misses the Second Life-style camera controls in Blender, I made this during the weekend: https://github.com/miunau/blender-secondlife-camera-addon
It makes navigating 3D spaces so much easier with keyboard and mouse.
> The initial platform where this idea will be tested is the Apple iPad Pro with Apple Pencil, followed by Android and other graphic tablets in the future.
Asus has been releasing, year after year, two performance beasts with a very portable form factor, multi-touch support and top-of-the-line mobile CPUs and GPUs: the X13 and Z13.
https://rog.asus.com/laptops/rog-flow-series/?items=20392
Considering the Surface Pro line also gets the newest Ryzen AI chips with up to 32 GB of RAM, having them as second-class citizens is kinda sad.
PS: Blender already runs fine as-is on these machines. But getting a new touch paradigm would be extremely nice, and it would be a better test bed than a new platform IMHO.
Agree here, they already have a good test bed of devices and the X13 is a bold attempt. (only limitation is the Snapdragon version).
I got a Lenovo Yoga precisely because it has a good drawing experience and the Lunar Lake Intel SoCs that have GPU acceleration in Blender.
Yes.
And I fumbled the CPU attribution for the Surface devices; they are of course Intel processors (Intel Core Ultra Series 2) for business. The consumer line stays on Snapdragon.
I used some build of Blender on Windows Mobile; crazy how efficiently it worked on a 400 MHz HTC Niki, the only problem being the screen size. So the performance part seems to have been solved since forever :D
Replying mainly to the title, but I'm surprised VR-based 3D modeling never took off. I've only dabbled with Blender; I know it's a powerful tool, but the learning curve just to navigate the interface is steep, compared to some creative VR programs which felt instantly intuitive for 3D modeling. I guess for a professional digital artist, having fine technical control of your program is more important.
Modelling is fairly laborious.
Almost all VR devices require lots of motion, have limited interaction affordances and have poor screens.
So you’re going to be more tired with a worse experience and will be working slower.
What XR is good for in the standard creative workflow is review at scale.
Blender does have an OpenXR plugin already but usability is hit or miss.
People do use VR for modeling/animation in Blender - https://freebirdxr.com (disclaimer: I'm the author of this plugin)
Freebird is still quite basic, but it's under active development and more modeling and posing features are being added.
So a core set of people definitely use VR in Blender for creation/editing, but I agree the number is quite small.
The biggest holdup for VR sculpting is you have nowhere to rest your hands or tools. In a physical medium you can rest your weight and tools against the clay, glass etc. that you're working with.
This is part of the reason why high-end 3D cursors have resistive feedback, especially since fine motor control is much easier when you have something to push against and don't have to support the full weight of your arm.
I hope the idea of blender apps is still alive. https://code.blender.org/2022/11/blender-apps/
Could be really powerful in conjunction with Blender being supported on iPad (and possibly iPhone).
Maybe the problem is just impossible, or maybe AI assistance will solve it, but it's crazy to me how complex 3D software like Blender/Maya/3DSMax/Houdini etc. still is. There are 1000s and 1000s and 1000s of settings, deep hierarchies of complexity. And mostly to build things that people built without computers, with only a few tools, in the past. I had hoped VR (or AR) might somehow magically make this all more approachable, but no, it's not much easier in VR/AR. The only tool I've seen in the last 30 years that was semi easy was the Spore Creature Creator, though of course it was super limited.
I guess my hope now is that rather than selecting all the individual tools, I just want AI to help me. At one level it might be trying to recognize gestures as shapes: you draw a near circle, it makes it a perfect circle. Lots of apps have tried this. Unfortunately they are so finicky, and then you still need options to turn it off when you actually don't want a perfect circle. Add more features like that and you're back to an impenetrable app. That said, maybe if I could speak it. I draw a circle-like sketch and say "make it a circle"?
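To be fair, the snap-to-circle part is cheap to prototype. A toy least-squares (Kåsa) circle fit, with synthetic points standing in for a wobbly hand-drawn stroke (not from any real app):

    import numpy as np

    def fit_circle(points):
        """Fit x^2 + y^2 + D*x + E*y + F = 0; return (center, radius)."""
        x, y = points[:, 0], points[:, 1]
        A = np.column_stack([x, y, np.ones(len(x))])
        b = -(x**2 + y**2)
        (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
        cx, cy = -D / 2, -E / 2
        return (cx, cy), np.sqrt(cx**2 + cy**2 - F)

    t = np.linspace(0, 2 * np.pi, 50)
    r = 1.5 + 0.05 * np.random.randn(50)  # wobbly radius
    pts = np.column_stack([2 + r * np.cos(t), 3 + r * np.sin(t)])
    print(fit_circle(pts))  # ~((2.0, 3.0), 1.5)

The hard part, as you say, is deciding when the user wanted the snap at all.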
But it's not just that; trying to model almost anything just takes soooooooooo long. You've got to learn face selection, vertex selection, edge selection, extrusion options, chamfer options, bevel options, mirroring options, and on and on, and that's just the geometry for geometric things (furniture, vehicles, buildings, appliances). Then you have to set up UVs, etc.....
And it gets worse for characters and organic things.
The process hasn't changed significantly since the mid-90s AFAICT. I learned 3ds in '95. I learned Maya in 2000. They're still basically the same apps 25-30 years later. And Blender fits right in, being just as giant and complex. Certain things have changed: sculpting like ZBrush, node geometry like Houdini, and lots of generators for buildings, furniture, plants, trees. But the basics are still the same, still tedious, still need 1000s of options.
It's begging for disruption to something easier.
Couldn't this be said for any creative software, e.g., After Effects, IDEs, DAWs, NLEs, Photoshop, Illustrator, etc... and even a few non-creative applications like Word and Excel? (To your point, I do think 3D modeling software is the most complicated of all of these, but I think as a general rule, the complexity is more similar between these software categories versus other categories like Mail, Notes, etc...)
For my part, I think this is because to do good creative work you need minute control, and minute control essentially just means adding lots of controls to software. Sure, you can mitigate this with AI, the learning curve especially, but that only really helps unskilled workers and hinders skilled workers (e.g., a slider is a more efficient way for a skilled user to get an exact exposure, rather than trying to communicate that with an AI). And I don't really think there's a need for software where unskilled users have a high degree of control (i.e., skill and control are practically synonyms).
I see your point. Yes, all those types of software are complex. And maybe that's just the way it has to be. But take the non-software versions. Sculpting a bust out of clay is certainly a difficult skill, but it doesn't take 3000+ hierarchical options. All it takes is a pile of clay and 2 hands. Maybe add 1 to 5 tools. Similarly, building lots of wood furniture is a skill but doesn't take 3000+ hierarchical options. It takes just a few tools. Drawing a diagram on paper (Illustrator) takes a pencil, a ruler, maybe a compass.
I just feel like there's some version of these tools that can be 100x simpler. Maybe it will take a holodeck to get there, and AI reading my mind so that it knows what I want to do without me having to dig through 7 levels of menus, subsections, etc....
I think this point trends into HCI (e.g., https://worrydream.com/ABriefRantOnTheFutureOfInteractionDes...) but I think the overarching phenomenon you're describing is that custom interfaces are usually superior to adaptations of the mouse/keyboard.
There are many areas where custom hardware interfaces are popularly used in conjunction with software:
- MIDI controllers (note both for the piano keys, and for the knobs/sliders, and even foot pedals)
- Jog wheels for NLEs
- Time-coded vinyl https://en.wikipedia.org/wiki/Vinyl_emulation
- WACOM Tablets
All of these custom hardware interfaces accomplish the same thing: They make using the software more tactile and less abstract. Meaning you replace abstract-symbol lookup (e.g., remembering a shortcut or menu item) with muscle memory (e.g., playing a chord on a piano).
So TLDR, the reason that we don't have what you're looking for is that we don't have a good way to simulate clay and wood as hardware that interfaces nicely with software.
Note there's a larger point here, which I think is more what you were getting at. I think people sometimes expect (and I expected this when I was younger) that computers could invent new, better interfaces to tasks (e.g., freed from the confinements of physics). Now I think it's totally the opposite: the interfaces from the physical world are usually better (which makes sense if you think about it; often what we're talking about are things that human beings have refined over thousands of years), and enforcing the laws of physics usually actually makes things easier (e.g., we've been dealing with them since the moment we were born, so we have a lot of practice).
Finally, also note that custom hardware interfaces only tend to help along one axis (e.g., a MIDI controller only helps enter notes/control data). The software still ends up being complex, because people also want all the things computers are good at that real-world materials aren't: redo/undo, combining back together things that have been broken apart, zooming in/out, seeing the same thing from several perspectives at once, etc.
PS: I don't even know if the Holodeck or mind-link would really help here. It's possible, but it's also possible it's just difficult for our brains to describe what we want in a lot of cases. E.g., take just adjusting the exposure: you can turn it down, oh, but wait, I lost the violet highlight that I liked; how can I light this scene, keep that highlight, and make it look natural? I don't know, maybe this stuff does map to a Holodeck/mind-link, but it's also possible that just having tons of options for every light really is the best solution to that.
> It's begging for disruption to something easier.
This assumes the complexity is incidental rather than inherent. I think the problem is similar to how Reality has a surprising amount of detail[1]. AI will likely eventually make a dent in this, but I think as an artist you tend to want a lot of granular control over the final result, and 3D modeling+texturing+animation+rendering (which is still only a subset of what Blender does) really does have a whole lot of details you can, and want to, control.
[1] http://johnsalvatier.org/blog/2017/reality-has-a-surprising-...
Blender is one of my favorite open source tools.
Having a UI/UX for tablets is awesome, especially for sculpting (ZBrush launched their iPad version last year, but since Maxon bought it, it's subscription-only).
I joined the Blender development fund last year, they do pretty awesome stuff.
Based on the title, I expected Blender to present their own programming language, something like OpenSCAD. But then they started talking about multi-touch interfaces...
Anyone else having issues with video playback? The videos don't play at all on iPhone. The interaction design part is the most interesting to me; I'm curious to see what the team has come up with.
was expecting this to be voice control. guess vibe blending will have to wait.
Show HN: MCP server for Blender that builds 3D scenes via natural language
https://news.ycombinator.com/item?id=44622374 (2025-07-20; 62 comments)
Probably a useless submission but the discussion linked to the real thing at https://github.com/ahujasid/blender-mcp
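For the curious, the usual shape of tools like that (a hedged sketch of the general approach as I understand it, not the actual blender-mcp code) is a tiny add-on that listens on a local socket inside Blender and executes whatever Python the external MCP server sends over:

    # Rough sketch of the general approach, NOT the actual blender-mcp code:
    # a small server inside Blender executes Python sent by an external tool.
    import bpy        # Blender's Python API; only exists inside Blender
    import socket

    def serve(port=9876):  # port number is arbitrary for this sketch
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            code = conn.recv(65536).decode("utf-8")
            try:
                # e.g. the LLM side might send:
                # bpy.ops.mesh.primitive_cube_add(size=2, location=(0, 0, 0))
                exec(code, {"bpy": bpy})
                conn.sendall(b"ok")
            except Exception as e:
                conn.sendall(str(e).encode("utf-8"))
            conn.close()

    # Real add-ons run the server off the UI thread and marshal bpy calls
    # back onto Blender's main thread (e.g. via bpy.app.timers), since bpy
    # isn't thread-safe; that plumbing is omitted here for brevity.
    serve()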
Given that it's Blender, I was expecting it to be about 6DoF controllers.
I think they already work in Blender
(there even is https://www.printables.com/model/908684-spacemouse-mini-slim... - which I know works in freecad)
Blender already has "NDOF" settings for use with space mouse and similar tools. I'm no Blender expert, but those are indispensable as a user coming from a CAD background.
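They're scriptable, too. A quick hedged sketch, tweaking the NDOF preferences from Blender's Python console (property names are from memory and worth double-checking against your Blender version):

    import bpy

    # NDOF (space mouse) settings live under Edit > Preferences > Input;
    # the same properties are exposed to Python. Names may vary by version.
    inputs = bpy.context.preferences.inputs
    inputs.ndof_sensitivity = 1.5        # overall translation speed
    inputs.ndof_orbit_sensitivity = 1.0  # rotation speed
    inputs.ndof_deadzone = 0.1           # ignore tiny accidental motions
    bpy.ops.wm.save_userpref()           # persist across restarts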
Blender and FreeCAD are about the only two tools I use that are keeping me from selling my laptop.
Speaking of interfaces, when will we have one that works just by thinking—something less intrusive than Neuralink—that lets us control not just Blender, but the entire computer? I think my productivity would increase a lot...
I worked on non-invasive BCIs for a couple of years (this was about 7 years ago). My current horizon estimate for "put on a helmet and have a usable brain-computer interface" is never.
With implants, we are probably decades away.
What currently works best is monitoring the motor cortex with implants, as those signals are relatively simple to decode (and from what I recall, we're starting to be able to get pretty fine control). Anything tied to higher-level thought is far away.
As for thought itself, I wonder how we would go about it (assuming we manage to decode it). It's akin to making a voice-controlled interface, except you have to say aloud everything you are thinking.
Have you kept up with recent ML papers like MindEye, which have managed to reconstruct seen images using image generator models conditioned on fMRI signals?
Ever since that paper came out, I (someone who works in ML but has no neuroimaging expertise) have been really excited about the future of noninvasive BCI.
Would also be curious to know if you have any thoughts on the several start-ups working in parallel on optically pumped magnetometers for portable MEG helmets.
> Have you kept up with recent ML papers like MindEye, which have managed to reconstruct seen images using image generator models conditioned on fMRI signals?
Not really. I left the field mostly because I felt bitter. I find that most papers in the field are more engineering than research. I skimmed the MindEye paper and didn't find it very interesting. It's more of a mapping from "people looking at images in an fMRI" to identifying the shown image. They make the leap of saying this is usable to detect the actual mind's eye (they cite a paper that requires 40 hours of per-subject training, on the specific dataset), which I quite doubt. Also, we're nowhere near having a portable fMRI.
As for portable MEG, assuming they can do it: it would indeed be interesting. Since it still relies on synchronized regions, I don't think high-level thought detection is possible, but it could be better for detecting motor activity and some mental states.
Apple is introducing BCI support in their new OS builds.
In theory, if Blender exposed its UI to the Apple accessibility system, it would let you use things via BCI.
is there anything like Blender for the Quest / PCVR?
I think Blender supports VR natively, though I never tested it myself.
And there are also add-ons around that: https://docs.blender.org/manual/en/latest/addons/3d_view/vr_...
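If memory serves, you enable it under Edit > Preferences > Add-ons ("VR Scene Inspection"), or from the Python console; a hedged sketch (the module name is my best recollection):

    import bpy

    # Enable the built-in "VR Scene Inspection" add-on; the module name is
    # my best recollection and may differ across Blender versions.
    bpy.ops.preferences.addon_enable(module="viewport_vr_preview")
    bpy.ops.wm.save_userpref()  # persist the change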
> The feature set is limited to scene inspection use cases. More advanced use cases may be enabled through
:c
huh, i know what i'm doing next saturday
Not quite like Blender, but Openbrush is worth trying.
https://openbrush.app/
I know this is the classic eye-roll question, but is support planned for Linux/desktop devices? I imagine the future Android app could be used via Waydroid, but seeing how VLC managed to bridge that gap, perhaps a native build is possible?
I’m not sure I understand your question. Blender is already available on Desktop Linux.
Are you looking to use Blender on a small touch screen backed by desktop Linux?
I have a small touchscreen Linux device I use to view HN via 4G; it's a UMPC laptop from Donki called the Nanote Next. Using Blender's giant interface on that tiny device would be greatly improved if I could run the Android experience.
In theory, yes, you should be able to use this new simplified interface. It may require compiling Blender yourself, though.
I'll see what I can cook up... let's hope it can be enabled with a flag and doesn't require some inaccessible system APIs.
I will note that a Wacom One gen 1 screen/graphics tablet worked perfectly on a Raspberry Pi 4 when I tested it ages ago:
https://www.reddit.com/r/wacom/comments/16215v6/wacom_one_ge...
Looking into using the new gen 2 w/ touch on an rPi 5.
The Wacom kernel drivers are so nice, especially with the neat little interface GNOME has in Settings. I got a secondhand Wacom tablet from 2002 at a garage sale that serves its duty signing PDFs and sculpting in Blender on those rare occasions when it's needed.
Makes me wonder if anyone's playing osu! on their Steam Decks...