It has already been discussed internally that there will not be any AI integration in the beta of RapidWeaver Elements. We are working hard on developing and polishing up the core features and elements so that you can get a beta version in your hands to play with as soon as possible.
With that said, AI is not slowing down. A lot of apps are integrating AI in some form or another, and even Apple is getting SIRIus about AI this year.
So the question is, if there were to be any AI integration in RapidWeaver Elements down the road, what would y’all like to see, either from us, or from third-party developers?
I’ll start: I think it would be useful to have some kind of image generation tool in the resources panel. Something like DALL·E 3, Midjourney, Stable Diffusion, etc., where users could generate images for their website.
Perhaps something like that could be built into the Health Checker. Currently it flags SEO-related things a user might have missed when creating their pages and links them to the area where they can fix them. The advice is rudimentary, though; it assumes the user already knows the best way to fix the issue (such as the best description or browser title to enter, as in the example below).
I could see AI being used here to recommend a page title and description based on the page’s content, with the ability for the user to modify them.
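To make the idea concrete, here is a toy sketch (my own, not anything from Realmac) of the non-AI half of that flow: deriving a suggested meta description from page copy, which the user could then edit. An AI-backed version would swap the truncation heuristic for a language-model call.

```python
# Toy sketch: derive a suggested meta description from page text.
# A real AI-backed Health Checker would ask a language model instead;
# this only shows the "suggest, then let the user edit" shape.

MAX_DESCRIPTION = 155  # common practical length limit for meta descriptions

def suggest_description(page_text: str, limit: int = MAX_DESCRIPTION) -> str:
    """Trim page copy to a description-sized snippet at a word boundary."""
    text = " ".join(page_text.split())  # collapse runs of whitespace
    if len(text) <= limit:
        return text
    cut = text.rfind(" ", 0, limit - 1)  # last word boundary before the limit
    return text[:cut] + "…" if cut > 0 else text[: limit - 1] + "…"
```

The point is the workflow, not the heuristic: the app proposes, the user disposes.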
Indeed that is a good reference. Image generation, text generation, and SEO optimization seem like three big areas where AI could shine when building a website.
Exactly, users could bring their own API key (OpenAI, Anthropic, Stable Diffusion, etc.) and just plug it into RapidWeaver in some settings panel somewhere, and bam, instant AI capabilities.
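As a rough illustration of that “plug in your own key” idea, here is a minimal sketch of a per-provider key store. The header schemes are the ones OpenAI-style and Anthropic-style APIs use today, but everything about how Elements might expose this is purely hypothetical:

```python
# Hypothetical sketch of a "bring your own API key" settings layer.
# Nothing here reflects any real Elements API; it only illustrates
# storing one key per provider and building the matching auth header.

from dataclasses import dataclass, field

@dataclass
class AISettings:
    keys: dict[str, str] = field(default_factory=dict)  # provider -> API key

    def set_key(self, provider: str, key: str) -> None:
        self.keys[provider.lower()] = key

    def auth_header(self, provider: str) -> dict[str, str]:
        """Build the auth header a request to this provider would carry."""
        key = self.keys.get(provider.lower())
        if key is None:
            raise KeyError(f"No API key stored for {provider}")
        # Anthropic's API uses an x-api-key header; OpenAI uses a Bearer token.
        if provider.lower() == "anthropic":
            return {"x-api-key": key}
        return {"Authorization": f"Bearer {key}"}
```

One settings panel, one key per provider, and every AI feature in the app can route through it.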
I absolutely agree with what @MultiThemes said. Elementor is currently my reference for AI content creation within a website builder. It is also well implemented in Divi, but that is pretty expensive.
It would be really great if I could integrate the AI tools I currently use, OpenAI and Leonardo AI, via an API key.
A lot of the features suggested sound like AI-washing. They would all be helpful in a classic app; however, I don’t think they address the big picture. Apps, all apps, will be fundamentally changing with deep GenAI/ML integration.
As an example, I watched Dan’s website-building exercise with Elements today. Two years ago I would have found the ability to drag elements onto a blank page to replicate a design I like powerful. Today, however, I can grab a screenshot of a page I like, drop it into ChatGPT, and get all of the code needed to duplicate that site with no effort on my part. When ChatGPT’s 4o model rolls out video analysis to all users (as it already does with screenshots), it will be able to reproduce things like a website’s animation code from a quick screen capture.
First, I think that fine-tuning a website is still much easier with a tool like Elements, so I think there is a place for Elements in a GenAI world. However, I think fundamental things have to change. For example: being able to give a text (or voice, if you can talk to the app… maybe via Siri Pro) description and have a chat with Elements to build out a website using its components rather than just raw code. This would be similar to how Figma builds out WordPress frameworks, but built on Elements’ much better UI. If there were a place where I could give the site a lot of descriptive context, things like type of business, type of website, styles, etc., then anything added would use that as its basis.
Perhaps also the ability to build out actual element components themselves by describing them. This would be similar to a Stacks builder, or, for ChatGPT users, a custom GPT builder.
I could go on, but basically: a complete rethink of how GenAI is going to be used in the very near future, and how best to set up Elements as a tool and environment for realizing the ideas people have, using text and images.
@jscotta Hi, I really doubt it’s that simple. I use AI regularly, including ChatGPT (now 4o), and I correct its errors just as regularly, whether they are errors of understanding, execution, or information. One simple example among many others: I ask it to optimize a .htaccess file and it provides me with incorrect content that cannot work (syntax errors, badly handled variables, etc.). As a way to save a lot, really a lot, of time, AI is great; as a source of reliable content, much less so.

It will improve, for sure, but since I like to play with, hmm, test the different solutions, I find the same errors, or the same types of errors, in other AIs. This leads me to think that these errors are structural rather than cyclical, and that correcting them requires a profound modification. For the moment, it is the statistical processing that is improving. And could it be otherwise? I always remind people that the important term in AI is not the I but the A.

Maybe these are just the reflections of a dinosaur who learned to program (we didn’t say “code” yet) with Turbo Basic and Turbo Pascal, then Borland C++. Yes, I know it’s dated… and who bought his first computer a long time ago: a Sinclair ZX81, with the 16 KB RAM extension you had to assemble yourself, of course.

So that there is no ambiguity: I like AI and I want it to progress, to allow us to do what we cannot do alone today (which is already the case for drawing, as far as I am concerned), but I am realistic; it will still take time to get there. So I really hope GDPR and accessibility support will be developed in the next update… with self-integration into RW Elements. Christmas wish-list time!
Good morning, Bruno! You must be a young pup! I started out with an Altair 8800. I also had a Sinclair, but preferred to code on the TRS-80 at that time. Those were the days! Pascal?! That is chalkboard language!! With the Altair it was raw 8080 binary. Eventually I moved to a board computer with a hex keypad and LED readout… a huge improvement over the front panel on the Altair. With the TRS-80, BASIC was great, but things really got going when I could save work to an audio cassette player. Hard drives and even floppy drives were things we heard about in data centers. Eventually I was able to move to mainframes and punch cards (my first was a Xerox Sigma 9… it was ugly). Sorry… enough of the old-guy war stories. We should chat more about this later… just for fun. On to the business of the day!
Anyway, yes, GenAI has issues. However, I’ve found in my own use that incorrect information and hallucinations are directly affected by priming and prompting (including good use of RAG). For example, when I used ChatGPT for assistance with AppleScript, it often gave me errors that required a lot of back-and-forth iteration in the chat to correct. However, once I created a custom GPT and provided it with reference documents, including the full current AppleScript documentation along with specific app dictionaries, those issues were virtually eliminated. And, as you pointed out, it gets better every day. BTW… have you played with Figma’s AI website generation tool?
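The reference-document trick described above is essentially retrieval-augmented generation (RAG). Here is a toy illustration of just the retrieval step, using keyword overlap in place of the embedding search real systems use:

```python
# Toy retrieval step in the spirit of RAG: pick the reference snippets
# that best match a question, so they can be placed in the model's context.
# Real systems use vector embeddings; word overlap is enough to show the idea.

def top_snippets(question: str, snippets: list[str], k: int = 2) -> list[str]:
    """Return the k snippets sharing the most words with the question."""
    q_words = set(question.lower().split())
    return sorted(
        snippets,
        key=lambda s: len(q_words & set(s.lower().split())),
        reverse=True,
    )[:k]
```

Grounding the model in the best-matching snippets is what pushes it toward citing the documentation instead of improvising.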
I’d love to see AI in Elements be able to generate fully editable templates (either a full page or just a component) based on a screenshot, wireframe, sketch, or even text description of a design or layout you’d like to work with (as a starting point).
Along the same lines it would be nice to be able to have AI in Elements generate a number of variations on a template or layout to consider/test, and then to be able to refine those variations as required.
AI could completely change how you see web development and thus require an entirely new product; or you could just look at integrating various GenAI capabilities into Elements to improve it. There is so much to discuss here. And a lot will depend on how willing you are to step outside the pure Apple ecosystem for technology. I say that because my connections give me the feeling that nothing is really changing at Apple (with a few exceptions) regarding product development and deployment. So I think it is possible that Apple will be behind the curve for quite a while. I do so hope I am wrong.
If Apple provides you with the tools for actual AI agents, then add a chat window (your pick on style depending on your UX desires) so that people can tell Elements what they want to be done and have Elements execute it.
For example (prompt):
Using the reference documents I’ve provided, create a page with children pages for the desired content in the documents. Perhaps use one of the favorite page templates. If you have suggested content for areas that are not complete, provide suggestions. Provide or create suggestion images as needed to fit the current branding and content needs.
Result: The agent uses Elements components to create the desired pages and all required elements. You then go in and refine the design, either with further text prompting or manually.
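Purely as an illustration of the shape of such an agent’s output (Elements has no such API; everything here is hypothetical), a tiny parser that turns an indented outline, the kind of thing a chat prompt might produce, into a parent/child page tree:

```python
# Hypothetical sketch: turn a two-level indented outline (as an agent
# might emit from a prompt) into a parent -> children page mapping.

def outline_to_tree(outline: str) -> dict[str, list[str]]:
    """Parse a two-level indented outline into {page: [child pages]}."""
    tree: dict[str, list[str]] = {}
    current = None
    for line in outline.splitlines():
        if not line.strip():
            continue                      # skip blank lines
        if line.startswith(("  ", "\t")):  # indented line -> child page
            if current is not None:
                tree[current].append(line.strip())
        else:                              # top-level page
            current = line.strip()
            tree[current] = []
    return tree
```

The agent’s real job would be mapping each node onto Elements components; the tree is just the skeleton it would hand off.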
Help System
Do you want some low-hanging fruit? Provide the Elements Help documentation through a GenAI chatbot rather than the crappy, 1990s-style Apple help system.
Has anyone here used Relume? Well…I think that is a good example to consider for Elements and AI usage. There are several potential advantages Elements can have over Relume:
Local versus web-based
Everything done in one app (Relume needs to export to another tool to actually build the website).
The responsive Realmac team for support and development
Seriously, a local Mac app with component building and access to everything the local Apple ecosystem can offer (hopefully more with WWDC) looks like a big win.
@jscotta Hi, I just had a look at Relume and Webflow… It’s a gigafactory of ideas. I totally agree with you and your plan for Realmac. So, Dan and team: on board?
Well… in a few days we will see whether Apple gives developers the tools to build really innovative things into their apps, and whether the Apple ecosystem will multiply that ability with great cooperation and action between applications.
I say that because I’m tired of having to build out my clients’ AI automations with tools like Zapier, Make, Bubble, etc. I would much rather work only within the Apple ecosystem, with the privacy, lower costs, and control of localized AI and automation systems. That is one of the primary reasons I’ve been such a PITA to Dan and Realmac. Even though I think I’ve been using RapidWeaver since about the earliest version (way back near the turn of the century), I doubt they have any idea who I am, except perhaps as that annoying guy in the forums.
Anyway, here’s hoping that Apple gives Realmac, Tap Forms, and the whole suite of Apple apps the tools they can use to revolutionize not just the Mac, but the world. Elements could be soooo powerful with the right tools.
Well, I don’t think of you as an annoying guy. Obviously we are no longer as young as we were (that’s not a criticism; age does not only have flaws, and we are the proof), and we are always on the lookout for the future. I think that is the essential part of being truly alive. General MacArthur said that age is not a matter of years but of state of mind: there are young people who are old in spirit and old people who are young in spirit. I totally agree with that… In fact, I agree with him more with every day that ends.

I am surprisingly surprised (weird formulation, I know) by the progress of AI. Thank you again for helping me discover more.