Really happy to see it, after 25 years (https://www.bugcrowd.com/glossary/cross-site-scripting-xss/) of surviving without it. It always struck me as an obvious missing part of the DOM API, and I still don't know why it took this long.
But mostly I'm just happy that it's finally here. I do appreciate all the hard work people have been doing to get this live.
Yes
<sc<script>ript>
We enabled this by default in Firefox Nightly (only) this week.
I'll be very excited to use this in Lit when it hits baseline.
While lit-html templates are already XSS-hardened because template strings aren't forgeable, we do have utilities like `unsafeHTML()` that let you treat untrusted strings as HTML, which are currently... unsafe.
With `Element.setHTML()` we can make a `safeHTML()` directive and let the developer specify sanitizer options too.
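Something along these lines, perhaps (a sketch only — the directive shape, the option plumbing, and the sanitizer config are assumptions, not shipped Lit code):

import { directive, Directive } from 'lit/directive.js';

// Hypothetical safeHTML(): parse and sanitize via the built-in setHTML(),
// then hand the resulting nodes to lit-html to render.
class SafeHTMLDirective extends Directive {
  render(value, options) {
    const container = document.createElement('div');
    container.setHTML(String(value), options); // built-in sanitization
    return [...container.childNodes]; // lit-html renders iterables of nodes
  }
}
export const safeHTML = directive(SafeHTMLDirective);

// usage (hypothetical):
// html`<article>${safeHTML(untrusted, { sanitizer: { elements: ['p', 'em', 'a'] } })}</article>`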
Why don't you use DOMPurify right now? It's battle tested and supports configs just like this proposal.
One, lit-html doesn't have any dependencies.
Two, even if we did, DOMPurify is ~2.7x bigger than lit-html core (3.1 KB minzipped), and the unsafeHTML() directive is less than 400 bytes minzipped. A sanitizer is just a really big dependency to take on, and which one to use is an opinion we'd have to hold. And lit-html is extensible, so people can already write their own safeHTML() directive that uses DOMPurify.
For us it's a lot simpler to have safe templates, an unsafe directive, and not to parse things too finely in between.
A built-in API is different for us though. It's standard, stable, and should eventually be well known by all web developers. We can integrate it with no extra dependencies or code, and just adopt the standard platform options.
Why would the framework do that?
App developers can still use that right now, but if the framework forced its usage it'd unnecessarily increase package size for people who didn't need it.
So that's why template literals are broken. I am not much of a JS dev, but sometimes I play one on TV, and I was cursing up a storm because I could not get templates to work the way I wanted them to. And I quote: "What do you mean template strings are not strings? What idiot designed this?"
If curious: I had a bright idea for a string translation library. Yes, I know there are plenty of great internationalization libraries, but I wanted to try it out. The idea was to just write normal-ish template strings so the code reads well; the translation engine would then look up the template string in the language table and replace it with the translated template string, and this new template string is the one that would be filled in. But I could not get it to work. I finally gave up and had to implement "the template system we have at home" from scratch just to get anything working.
To the designers of JS template literals, I apologize; you were blocking an attack vector that never crossed my mind. It was the same the first time I had to do the CORS dance. I thought it was just about the stupidest thing I had ever seen: "This protects nothing, it only works when the client (the part you have no control over) decides to do it." The idea that you need protection after you have deliberately injected unknown malicious code (ads) into your web app took me several days of hard thought to understand.
I've written a fair number of custom template literals, and I don't understand what your complaint is. Can you share more details?
JS can't use a string as a template.
My example: a table to look up translated templates. Most translation engines require you to use placeholder strings; this lets you use the template directly as the optional lookup key.
Simplified, with some liberties taken, as this can't be done with template literals. Easy enough to fake with some regexes and loops, but I was a bit surprised that the built-in JS templates are limited in this manner:
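const translate_table = {
  'where is the ${thing}': '${thing} はどこですか',
};

// String.prototype.format doesn't exist in JS — it's one of the liberties
// taken above; read it as "fill the ${...} slots from args".
function t(template, args) {
  if (translate_table[template] == undefined) {
    return template.format(args);
  } else {
    return translate_table[template].format(args);
  }
}

user_dialog(t('Where is the ${thing}', { thing: users_thing }));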
I even dug deep into tagged templates, but they can't do this either. The only solution I found was a variant of eval(), and at that point I would rather write my own template engine.
That's really good to hear.
I've found LLMs will happily generate XSS-vulnerable code, which will make things worse for a while until they can be trained better.
In fact, I found it really difficult to get claude-code to use templating libraries rather than defaulting to hand-written templating, with XSS vulnerabilities and content injected directly, even after going through the options with it.
There's also a difference between escaping and sanitisation which can be tricky to handle and track, and it can even be dangerous to try to mix different approaches or sanitizers.
Having a safe backstop in the form of setHTML() to use will be a fantastic addition to narrow the scope of ways to get it wrong.
As someone who has dealt with more than my fair share of content injection vulnerabilities over the years, this is great to see at last. It’s kinda crazy that this is only coming now, while other, more cumbersome solutions like CSP have been around for years.
So `.setHTML("<script>...</script>")` does not set HTML?
Sounds reasonable enough to me. 99.99% of the time, if you’re in an actual script and you mean to execute code, you’d just execute it yourself rather than making a script tag full of code and sticking that tag into a random DOM element. That’s why the default wouldn’t honor the script tag, and there’d be an “unsafe” method explicitly named as such to hint that you’re doing something weird.
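For instance (a sketch of the described default behavior; `#out` is an assumed target element):

const el = document.querySelector('#out');
el.setHTML('<p>hi</p><script>alert(1)</script>');
console.log(el.innerHTML); // "<p>hi</p>" — the script element was stripped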
Neither does
So is this basically a safe version of innerHTML?
Yes, although a slightly more relevant way of putting it would be that it's an inbuilt DOMPurify (DOMPurify being an npm package commonly used to sanitize HTML before injecting it).
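Side by side, for illustration (`el` and `untrusted` are assumed):

// library approach: sanitize, then inject
el.innerHTML = DOMPurify.sanitize(untrusted);

// built-in approach: sanitize and inject in one call
el.setHTML(untrusted);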
Is this basically doing the same thing as HTTPS now? But for HTTP, and Firefox just never implemented a simple fix for its entire existence until now?
I obviously know nothing about this, but I still find it fascinating. Or am I off my block.
XSS isn't related to HTTPS/SSL. SSL is the secure connection between you and the server, whereas XSS is the injection of data into the site, which will be executed in your browser in this case. The connection isn't relevant.
https://developer.mozilla.org/en-US/docs/Web/Security/Attack...
This has nothing whatsoever to do with http.
I'm confused as to why you need a "safe" version if you're the one generating and injecting the HTML.
As it turns out, verifying that HTML is safe to render without neutering HTML down to a whitelist of elements is actually quite difficult. That's not great when you're rendering user-generated content.
Solutions in the form of pre-existing HTML sanitisation libraries have existed for years but countless websites still manage to get XSS'd every year because not everyone capable of writing code is capable of writing secure code.
Isn't this kinda like asking "why does my gun need a safety if I'm the only one consciously pulling the trigger"?
1. Because you commonly are not.
2. Because it’s really easy to fuck up and leak attacker controlled content in markup, especially when the environment provides tons of tools to do things wrong and none to do things right. IME even when the environment provides tons of tools to do things right it’s an uphill battle (universe, idiots, yadda yadda).
It was kind of strange to have bbcode and wiki markup specifically to avoid allowing users to use html.
Gruber’s original Markdown tool passes HTML straight through; it was designed to make writing long-form content easier.
Markdown implementations can do any of that, only allowing a whitelist of HTML elements (GFM), or not allowing HTML at all.
If you generate it from completely static and known values, have at it.
If you include user-provided data, then you should sanitize it for HTML.
Why should a web page only have a single person generating and injecting HTML into it?
The analogy doesn't hold markup ;)
Whether I generate a whole page or generate a partial page and then add HTML to it is equivalent from a safety perspective.
A single company. Why would I let another company inject HTML into my page?
There's this newfangled concept called social media where you let other people post content that exists on your web site. You're rarely allowed to post HTML because of the associated issues with sanitizing it. setHTML could help with that.
I just had a flashback to the heyday of MySpace. Now that I think about it though, Neocities has the "social networking" of being able to discover other people's pages and give each other likes and comments.
Hmmm...
This is good news for me. Finally! A safer and more predictable alternative to innerHTML.
Maybe it is then time to have something beyond "use strict" at the beginning of a JavaScript document as one option for such a statement.
I think a config object in which you define script options like sanitization and other script configuration might be helpful.
After all, backward compatibility almost always needs to be ensured, and this might work. I am no spec guy; it is just an idea. React makes use of "use client"/"use server", so this would be more central and explicit.
> It then removes any HTML entities that aren't allowed by the sanitizer configuration, and further removes any XSS-unsafe elements or attributes — *whether or not they are allowed by the sanitizer configuration*.
Emphasis mine. I do not understand this design choice. If I explicitly allow `script` tag, why should it be stripped?
If the method was called setXSSSafeSubsetOfHTML sure I guess, but feels weird for setHTML to have impossible-to-override filter.
> feels weird for setHTML to have impossible-to-override filter.
It really doesn’t. We’ve decades of experience telling us that safe behaviour is critical.
> I do not understand this design choice. If I explicitly allow `script` tag, why should it be stripped?
Because there’s an infinitesimal number of situations where it’s not broken, and that means you should have to put in work to get there.
`innerHTML` still exists, and `setHTMLUnsafe` has no filtering whatsoever by default (not even the script deactivation innerHTML performs).
This is primarily an ergonomic addition, so it kinda makes sense to me to not make the dangerous footguns more ergonomic in the process. You can still assign `innerHTML` etc. to do the dangerous thing.
I agree, though I also agree with the parent that the method name is a little bit confusing. "safeSetHTML" or "setUntrustedHTML" or something would be clearer.
Naming things in that manner hasn’t proven to be a good idea over the years.
When you have 2 of something and one is safe/better and the other one is known to be problematic, you give the awkward name to the problematic one and the obvious name to the safe/better one. Noobs oughtn’t to be attempting the other one, and anyone who is mature enough to have reason to do it is mature enough to appreciate the reason behind that complexity.
Idk about that, there's a good argument that the most obvious methods should be the safe ones. That's what juniors will probably jump to first. If you need the unsafe ones, you'll probably be able to figure that out and find them quickly.
I like React's dangerouslySetInnerHTML. The name so clearly conveys "you can do this but you really, really, really shouldn't".
Indeed, the web platform now has setHTML() and setHTMLUnsafe() to replace the innerHTML setter.
There's also getHTML() (which has extra capabilities over the innerHTML getter).
Why not name it what it does: sanitizeAndSetHTML
Ideally this should be called dangerouslySetInnerHTML but hindsight blah blah
You have to make the safe version the ergonomic one. Many many C++ memory bugs are a result of the standards committee making the undefined behaviour version of an operation even 3 characters shorter than the safe one. (They're still doing it too! I found another example added in C++23 recently)
If you want to use an XSS-unsafe Sanitizer you have to use setHTMLUnsafe.
I guess they are going for a safe default... the idea is people who don't carefully read the docs or carefully monitor the provenance of their dynamically generated HTML will probably reach for "setHTML()".
Meanwhile, there's "setHTMLUnsafe()" and, of course, good old .innerHTML.
A script tag would be able to call setHTMLUnsafe, bypassing whatever sanitation you configured.
I’d’ve made it a runtime error to call setHTML with an unsafe config, but Javascript tends toward implicit reinterpretation rather than erroring-out.
Wouldn't that open the floodgates by allowing code that could itself call `setHTML` again but then further revise the args to escalate its privileges?
Is “XSS-unsafe” precisely defined anywhere? I assume it means “any access to the JS interpreter”, but assuming in this context seems decidedly unsafe.
It appears you can tune what is sanitized from the input via the "sanitizer" optional parameter. The default sanitizer is however defined in a spec linked on the docs page [1] with the actual sanitize operation specified as well [2].
[1] https://wicg.github.io/sanitizer-api/#dom-element-sethtml
[2] https://wicg.github.io/sanitizer-api/#sanitize
Ah, perfect, the "remove unsafe" operation is what I was looking for. It includes a list of elements and a list of attributes. These appear to apply regardless of the sanitizer configuration you use; the original MDN link demonstrates allowlisting "script" but shows that it is removed anyway.
https://wicg.github.io/sanitizer-api/#sanitizerconfig-remove...
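A sketch of the distinction, going by the MDN and spec descriptions (`el` is assumed):

// setHTML(): "script" is stripped even though the config allows it
el.setHTML('<script>alert(1)</script>', { sanitizer: { elements: ['script'] } });

// setHTMLUnsafe(): the same config is honored and the script element stays in the markup
el.setHTMLUnsafe('<script>alert(1)</script>', { sanitizer: { elements: ['script'] } });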
> This feature is not Baseline because it does not work in some of the most widely-used browsers.
This is interesting, but it appears to be in its early days, as none of the major browsers seem to support it... yet.
A sibling comment by evilpie says that it is enabled in Firefox Nightly: https://news.ycombinator.com/item?id=45674985
Actually, it exists behind an about:config preference as far back as Firefox 138. So if you enable it, it even works in the current ESR.
Found a polyfill here https://github.com/mozilla/sanitizer-polyfill
The API design could be better. Document fragments are designed to be reused. It should accept an optional fragment key which takes a document fragment. If the value is not a fragment, throw; if it has children, empty the contents first.
In what way are document fragments meant to be reused?
They empty their contents into the new parent when they're appended, so they can't be meaningfully appended a second time without rebuilding them.
`<template>` is meant to be reused, since you're meant to clone it in order to use it, and then you can clone it again.
You can absolutely reuse a document fragment
https://ibrahimtanyalcin.github.io/Cahir/
The whole rendering uses a single fragment.
You can absolutely not reuse a DocumentFragment. The moment you append it to a node, the fragment is emptied.
https://dom.spec.whatwg.org/#mutation-algorithms
> To insert a node into a parent before a child [...]:
> If node is a DocumentFragment node:
> Remove its children
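A quick demonstration of that emptying behavior, plus the `<template>` cloning pattern mentioned above:

const frag = document.createDocumentFragment();
frag.append(document.createElement('p'));
document.body.append(frag); // moves the children out of the fragment
console.log(frag.childNodes.length); // 0 — the fragment is now empty

const tpl = document.querySelector('template'); // assumes a <template> in the page
document.body.append(tpl.content.cloneNode(true)); // clone, so the template survives
document.body.append(tpl.content.cloneNode(true)); // and can be stamped again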
>Verbose I/O element
Parsing > "DocumentFragment"
Returns proc. exit status [0]/[1] for browser HTML incompatibility.
So this is the easier, built-in successor to
}).createHTML()?
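(A guess at the Trusted Types pattern the truncated snippet refers to; the policy name and the DOMPurify call are assumptions:)

const policy = trustedTypes.createPolicy('sanitize', {
  createHTML: (input) => DOMPurify.sanitize(input),
});
el.innerHTML = policy.createHTML(untrusted);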
Great functionality, terrible name.
After a minute of digging, found discussion here: https://github.com/WICG/sanitizer-api/issues/100 Perhaps it can be reopened (or a new issue can be opened) regarding naming.
I sometimes wonder what the DOM APIs could look like in a hypothetical world where we could start over with everything.
It looks like this isn't a standard yet.
Why? Does it not set the HTML?
You can't tell from the name that there's a lot of hidden sanitizing stuff going on inside this method...
Something like "setSafeHTML()" would be preferable. (Since it's Mozilla, there should be a few committee meetings to come up with the appropriate name)...
Well, could it be safelySetHTML instead of setSafeHTML?
The second one could imply the HTML is already safe, while the first one is a safe way to set HTML.
If it's just setHTML, then it could imply that it doesn't care whether it's safe or not.
There is already an innerHTML property for elements. This doesn't set the outer HTML, so it's literally setInnerHTML2.
Neat. I think once this is adopted by HTMX (or similar libraries) you don't need to sanitize on the server side anymore?
Do you honestly feel that we will ever be in a place for the server to not need to sanitize data from the client? Really? I don't. Any suggestion to me of "not needing to sanitize data from client" will immediately have me thinking the person doing the suggesting is not very good at their job, really new, or trying to scam me.
There's no reason to not sanitize data from the client, yet every reason to sanitize it.
If you sanitize on the server, you are making assumptions about what is safe/unsafe for your clients. It's possible to make these assumptions correctly, but that requires keeping them in sync with all clients which is hard to do correctly.
Something that's sanitized from an HTML standpoint is not necessarily sanitized for native desktop & mobile applications, client UI frameworks, etc. For example, in Cloudflare's Cloudbleed security incident, malformed img tags sent by origin servers (which weren't by themselves unsafe in browsers) caused their edge servers to append garbage from heap memory (including miscellaneous secure data) to some responses, which then got indexed by search engines.
Sanitization is always the sole responsibility of the consumer of the content to make sure it presents any inbound data safely. Sometimes the "consumer" is colocated on the server (e.g. for server rendered HTML + no native/API users) but many times it's not.
> If you sanitize on the server, you are making assumptions about what is safe/unsafe for your clients.
No. I'm making decisions on what is safe for my server. I'm a back end guy, I don't really care about your front end code. I will never deem your front end code's requests as trustworthy. If the front end code cannot properly handle encoding, the back end code will do what it needs to do to not allow stupid string injection attacks. I don't know where your request has been. Just because you think it came from your code in the browser does not mean that was the last place it was altered before hitting the back end.
How can user input be unsafe on the server? Are you evaluating it somehow?
User-generated content shouldn't be trusted in that way (inbound requests from client, data fields authored by users, etc.)
Is that a serious question?
INSERT INTO table (user_name) VALUES ...
Are you one of today's 10000 on server side sanitizing of user data?
Communicating with a SQL driver by concatenating strings containing user input and then evaluating it? wat?
I'm very interested in what tech stack you are using where this is a problem.
People do it all the time, on any tech stack that lets you execute command strings. A lot of early databases didn't even support things like parameterized inserts.
Are you one of today's 10000 on using parameterized queries and prepared statements?
Unless you're doing something stupid like concatenating strings into SQL queries, there's no need to "sanitize" anything going into a database. SQL injection is a solved problem.
Coming from the database and sending to the client, sure. But unless you're doing something stupid like concatenating strings into SQL statements it hasn't been necessary to "sanitize" data going into a database in ages.
Edit: I didn't realize until I reread this comment that I repeated part of it twice, but I'm keeping it in because it bears repeating.
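For illustration, the parameterized version in JavaScript (node-postgres-style placeholders; the client setup is assumed, and this would run inside an async function):

// concatenation — injectable:
// await client.query("INSERT INTO users (user_name) VALUES ('" + name + "')");

// parameterized — the driver keeps the data out of the SQL text:
await client.query('INSERT INTO users (user_name) VALUES ($1)', [name]);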
SQL injection is solved if you use dependencies that solve it of course.
Other than SQL injection there is command or log injection; file names need to be sanitized, as does any user-uploaded content for XSS, and that includes images. Any incoming JSON data should be sanitized, extra fields removed, etc.
Log injection is a pretty nasty sort of attack that, depending on how the logs are processed, can lead to XSS or command injection.
It can be a complicated and error-prone process, mainly in scenarios where you have multiple mediums that require different sanitizers. Obviously you should do it. But in such scenarios, the best practice is to sanitize as close to the place it is used as possible. I've seen terrible codebases where they tried to apply multiple layers of sanitization on user input before storing to the DB, then reverse the unneeded layers before output. Obviously this didn't work.
Point being, if you can move sanitization even closer to where it is used, and that sanitization is actually provided by the standard library of the platform in question, that's a massive win.
You're making a bad assumption that client side code was the last place the submitted string was altered in the path to the server. The man in the middle might have a different idea and should always be protected against on the server where it is the last place to sanitize it.
By "sanitise" what's really meant is usually "escape". User typed their display name as <script>. You want the screen to say their display name, which is <script>. Therefore you send &lt;script&gt;. That's not their display name - that's just what you write in HTML to get their display name to appear on the screen. You shouldn't store it in the database in the display_name column.
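A minimal sketch of that escape-at-output step (using the usual minimal entity set):

const escapeHTML = (s) =>
  s.replace(/&/g, '&amp;')
   .replace(/</g, '&lt;')
   .replace(/>/g, '&gt;')
   .replace(/"/g, '&quot;')
   .replace(/'/g, '&#39;');

escapeHTML('<script>'); // "&lt;script&gt;" — displays as <script>, executes as nothing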
Agreed. The codebase I'm thinking of was html encoding stuff before storing it, then when they needed to e.g. send an SMS, trying to remember to decode. Terrible.
Sanitize as close as possible to where it is used is usually best, then you don’t have to keep track of what’s sanitized and what’s not sanitized for very long.
(Especially important if sanitation is not idempotent!)
It's arguably easier just to sanitise at display time otherwise you have problems like double escaping.
Easier does not mean better, which seems to be true in this case given the many, many vulnerabilities that have been exploited over the years due to a lack of input sanitization.
In this case easier is actually better. Sanitize a string at the point where you are going to use it. The locality makes it easy to verify that sanitation has been done correctly for the context. The alternative means you have to maintain a chain of custody for the string and ensure it is safe.
If you are using it at the client, sure, but then why is the server involved? If you are sending it to the server, you need to treat it like it is always coming from a hacker with very bad intentions. I don't care where the data comes from; my server will sanitize it for its own protection. After all, just because it left your browser "clean" does not mean it was not interfered with elsewhere upstream, TLS be damned. If we've double-encoded something, that's fine; it won't blow up the server. At the end of the day, that's what is most important. If some double decoding doesn't happen correctly on the client, then <shrugEmoji>
Yeah as an Irish person with an apostrophe in their name this attitude is why my name routinely gets mangled or I get told my name is invalid.
You don’t escape input. You safely store it in the database and then sanitize it at the point where you’re going to use it.
For five years now everybody has been saying jQuery is no longer necessary. But really basic functions like this took a long time to replace jQuery.
jQuery does not sanitize HTML. This is why jQuery is no longer necessary, even if people think it is.
There is the jQuery bashing again.

let sanitizedHTML = $('<div>').text(unsanitizedHTML).html();
I don't like this. This could be implemented as a JS library. I believe browsers should provide a minimal API so that they are smaller and easier to create. As for a safe alternative to innerHTML, it is called innerText.
I think innerText and setHTML() have different purposes. The former inserts the whole string as a text leaf, while the latter tries to preserve structures that are meaningful in context.
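Roughly (assuming an element `el`):

el.innerText = '<b>hi</b>'; // the page shows the literal text: <b>hi</b>
el.setHTML('<b>hi</b>');    // the page shows a bold "hi"; safe markup is kept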
---
Libraries can surely do the same job, but then the exact behavior would vary across a sea of those libs. Having specs defined [0] for such an interface would hopefully iron out much of this variation, as well as enable some performance gains.
[0]: https://wicg.github.io/sanitizer-api/#dom-element-sethtml
And if you need something that is not in the spec, you have to use a library anyway. Also, the point was that browsers should be as simple as possible, not like a whole new OS.
> I believe browsers should provide the minimal API so that they are smaller and easier to create.
That ship has long since sailed. Browsers are so complex that it takes quite some effort to support the various levels of nines of percentage compatibility with the standards, not to mention that the browser makers themselves define many of the standards.
Cursor built a pseudo-setHTML: https://github.com/skorotkiewicz/pseudo-sethtml
This code only does the most basic and naive regex filtering that even a beginner XSS course's inputs would work against. With the Node example code and input string:
The program outputs:
Asking a chatbot to make a security function and then posting it for others to use without even reviewing it is not only disrespectful, but dangerous and grossly negligent. Please take this down.
I wonder why Cursor chose a regex approach when it is widely known to be the wrong method. Is it a result of training on low-quality forums for beginners?
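To illustrate the class of bypass (an illustrative naive filter, not the repo's actual code):

// a naive one-pass strip of the kind regex "sanitizers" tend to be:
const naive = (s) => s.replace(/<script[\s\S]*?<\/script>/gi, '');

naive('<scr<script></script>ipt>alert(1)</scr<script></script>ipt>');
// → '<script>alert(1)</script>' — removing the inner tags reassembles the payload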
It does seem like a weirdly bad result. I got something more sensible that used DOMParser when I gave GPT-5 the following prompt:
> Write a JavaScript function for sanitizing arbitrary untrusted HTML input before setting a DOM element’s innerHTML attribute.
I won’t post it here in case someone tries to use it, but it wasn’t just doing regex munging.