Deftkit

Fake Data Generator — Names, Emails, Addresses & More

Generate realistic fake data for testing and development. Names, emails, phone numbers, addresses, UUIDs, dates — choose fields, set row count, export as JSON, CSV or SQL.

Fields

What is a fake data generator?

A fake data generator produces realistic-looking but entirely fictitious records — names, email addresses, phone numbers, physical addresses, dates, and other field types — on demand. Developers use it to seed databases with plausible rows before a real dataset exists, populate UI mockups with human-readable content, build API fixtures for frontend development, and write unit or integration tests that exercise form validation without touching production data.

The word "fake" is the point. The output looks like real data — proper names, formatted phone numbers, valid-looking email addresses — but every value is synthetically generated. No record corresponds to any actual person, account, or transaction. That distinction matters enormously when your team works under GDPR, CCPA, or similar data minimization requirements.

This tool is a browser-based dummy data generator: pick your fields, set the row count, choose JSON, CSV, or SQL as the output format, and click Generate. Everything runs client-side. Nothing is sent to a server.
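Conceptually, pool-based generation is simple: draw a random value from each field's list and assemble a record. A minimal sketch of the idea in JavaScript — the pools and field set here are illustrative, not the tool's actual lists:

```javascript
// Illustrative value pools -- the tool's real lists are larger.
const FIRST_NAMES = ["Margaret", "Daniel", "Priya", "Tomas"];
const LAST_NAMES = ["Okafor", "Hewitt", "Nair", "Lindqvist"];
const DOMAINS = ["gmail.com", "yahoo.com", "outlook.com"];

// Pick one element of an array at random.
const pick = (pool) => pool[Math.floor(Math.random() * pool.length)];

// Build one fake record; the email is derived from the name so the
// row looks internally consistent.
function fakeRecord() {
  const first = pick(FIRST_NAMES);
  const last = pick(LAST_NAMES);
  return {
    first_name: first,
    last_name: last,
    email: `${first}.${last}`.toLowerCase() + "@" + pick(DOMAINS),
  };
}

const rows = Array.from({ length: 3 }, fakeRecord);
console.log(JSON.stringify(rows, null, 2));
```

Every run produces a different batch, but every batch has the same shape — which is exactly what database seeds and fixtures need.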

Why use a mock data generator instead of real data?

Using real customer records in development or staging environments is a compliance risk and a liability. GDPR Article 5 requires that personal data be used only for the purpose it was collected; dumping a production user table into a test database violates that principle by default. Even where regulation does not apply, real data in dev environments means real email addresses get test emails, real phone numbers get called, and real names appear in screenshots shared with the wider team.

A test data generator sidesteps all of that. The synthetic records are structurally identical to real ones — same column types, same field lengths, same cardinality — so your queries, indexes, and validation logic behave exactly as they would on production data. When the sprint is over, there is nothing sensitive to clean up.

Because this tool runs entirely in the browser, there is also no signup, no API key, and no rate limit. You can generate a fresh set of test records in under two seconds, every time, without sending a single byte of your schema to anyone.

When do you need fake data?

  • Database seeding — populate a local or staging database with realistic rows so queries return real-shaped results rather than empty tables during development.
  • UI demos and presentations — show stakeholders a data table filled with plausible names and emails rather than "User 1, user1@test.com" placeholders.
  • API fixture files — store a JSON array of fake records next to your tests so network calls can be mocked without connecting to a real backend.
  • Form and validation testing — feed a range of field values through input validation to confirm your regex, length limits, and error messages behave correctly.
  • Load testing baselines — seed a test database with enough rows to profile query performance before real traffic arrives.

The difference between a random data generator (like a UUID Generator or hash tool) and a fake data generator is realism. Random bytes pass no human inspection; realistic names and emails make a demo feel finished and help QA engineers spot logic errors that only appear when fields contain plausible content.

Popular faker libraries — and where this tool fits

Several mature libraries generate fake data programmatically: Faker.js (JavaScript/TypeScript, the most widely used in web projects), Python Faker (the standard choice in Django and Flask projects), and Bogus (.NET). All three support dozens of locales, hundreds of field types, and complex relational scenarios through code. If you are building a test harness that needs to generate thousands of rows on every CI run, a library is the right choice.

This browser tool covers the most common field types — name, email, phone, address, city, country, company, job title, UUID, and date of birth — without requiring a library installation, a Node.js environment, or any code. It is the right tool when you need a small batch of records right now: for a database seed script, a JSON fixture, a CSV import, or a demo.

How to use the fake data generator

  1. Toggle the fields you want in each row. Available fields include first name, last name, email, phone, street address, city, country, company, job title, UUID, and date of birth.
  2. Set the row count. The tool supports up to 100 rows per generation — enough for most seeding and fixture tasks while keeping the browser responsive.
  3. Choose an output format: JSON (array of objects), CSV (RFC 4180, header row included), or SQL (a single multi-row INSERT statement).
  4. Click Generate. The output appears in the panel below.
  5. Click Copy to put the output on your clipboard, or Download to save it as a file.

Picking the right format: use JSON when building API fixtures or feeding a JavaScript test suite; use CSV when importing into a database via a GUI tool or spreadsheet; use SQL when you want to paste directly into a database console and run the INSERT without any intermediate step. If you need to convert between formats after generating, the JSON ↔ CSV Converter handles that translation.
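As an example of the JSON-fixture workflow, a generated batch can be parsed and used in a test suite with no network call at all (the fixture is inlined here; in practice you would load it from a file such as a hypothetical fixtures/users.json):

```javascript
// A generated JSON fixture, inlined for the example.
const fixture = `[
  {"first_name":"Margaret","last_name":"Okafor","email":"margaret.okafor@gmail.com"},
  {"first_name":"Daniel","last_name":"Hewitt","email":"daniel.hewitt@yahoo.com"}
]`;

// The tool's JSON output is valid JSON, so JSON.parse is all you need.
const users = JSON.parse(fixture);
console.log(users.length);        // 2
console.log(users[0].first_name); // Margaret
```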

Example output

JSON (3 rows)

[
  {
    "first_name": "Margaret",
    "last_name": "Okafor",
    "email": "margaret.okafor@gmail.com",
    "phone": "+1-312-555-0174",
    "city": "Chicago",
    "country": "United States"
  },
  {
    "first_name": "Daniel",
    "last_name": "Hewitt",
    "email": "daniel.hewitt@yahoo.com",
    "phone": "+1-415-555-0238",
    "city": "San Francisco",
    "country": "United States"
  },
  {
    "first_name": "Priya",
    "last_name": "Nair",
    "email": "priya.nair@outlook.com",
    "phone": "+1-646-555-0391",
    "city": "New York",
    "country": "United States"
  }
]

CSV (same 3 rows)

first_name,last_name,email,phone,city,country
Margaret,Okafor,margaret.okafor@gmail.com,+1-312-555-0174,Chicago,United States
Daniel,Hewitt,daniel.hewitt@yahoo.com,+1-415-555-0238,San Francisco,United States
Priya,Nair,priya.nair@outlook.com,+1-646-555-0391,New York,United States

SQL INSERT (same 3 rows)

INSERT INTO users (first_name, last_name, email, phone, city, country) VALUES
  ('Margaret', 'Okafor', 'margaret.okafor@gmail.com', '+1-312-555-0174', 'Chicago', 'United States'),
  ('Daniel', 'Hewitt', 'daniel.hewitt@yahoo.com', '+1-415-555-0238', 'San Francisco', 'United States'),
  ('Priya', 'Nair', 'priya.nair@outlook.com', '+1-646-555-0391', 'New York', 'United States');

Technical notes

JSON output is an array of objects, one element per row. Property names are snake_case and match the field labels exactly. The output is valid, parseable JSON you can pass directly to JSON.parse.

CSV output follows RFC 4180. The first row is always a header row. Fields that contain a comma or a double-quote are wrapped in double quotes and internal double-quotes are escaped by doubling them. This format imports cleanly into MySQL, PostgreSQL, SQLite, and most spreadsheet applications.
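The RFC 4180 quoting rule described above can be sketched in a few lines of JavaScript — this is an illustration of the rule, not the tool's internal implementation:

```javascript
// Quote a CSV field per RFC 4180: wrap in double quotes when the
// value contains a comma, a double quote, or a newline, and escape
// embedded double quotes by doubling them.
function csvField(value) {
  const s = String(value);
  if (/[",\n]/.test(s)) {
    return '"' + s.replace(/"/g, '""') + '"';
  }
  return s;
}

// Join one record into a CSV row.
const row = ["Acme, Inc.", 'She said "hi"', "Chicago"].map(csvField).join(",");
console.log(row); // "Acme, Inc.","She said ""hi""",Chicago
```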

SQL output is a single multi-row INSERT statement. Column names are snake_case. String values are single-quoted with internal single quotes escaped by doubling. The statement is compatible with MySQL, PostgreSQL, and SQLite syntax — copy it, paste it into your database console, and run it.
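The single-quote escaping convention is the same one MySQL, PostgreSQL, and SQLite all accept, and it is easy to sketch. Note this escaping is appropriate here only because the values are synthetic; for user-supplied input you should always use parameterized queries instead:

```javascript
// Escape a string for a single-quoted SQL literal by doubling
// embedded single quotes.
const sqlLiteral = (s) => "'" + String(s).replace(/'/g, "''") + "'";

console.log(sqlLiteral("O'Brien")); // 'O''Brien'
```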

Data realism: names are drawn from a curated pool of common English-language first and last names. Email addresses are derived from the generated name plus a common mail domain, producing addresses that look plausible without resembling any real person. Phone numbers follow the North American Numbering Plan format. There is no cross-field referential integrity — city and country are selected independently, so a row may show "Paris" in the city column and "United States" in the country column.

Limits

  • 100 rows maximum per generation. For bulk seeding at tens of thousands of rows, use a library like Faker.js in a Node.js script instead.
  • English-language name pool only. The current release does not include non-English first or last names; international locale support may be added in a future version.
  • No photo or avatar URL generation. If your schema includes a profile image field, use a placeholder service like https://i.pravatar.cc/150 and append the row index.
  • No referential integrity between fields. City, state, country, and ZIP code are each sampled independently. They will not always form a geographically valid combination.
  • No custom field templates. You can toggle the built-in fields on or off, but you cannot define a custom field type or provide your own value pool in this version.
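For the avatar limitation above, one workaround is a small helper that appends the row index to the placeholder service URL (the `img` query parameter is pravatar's convention, not this tool's):

```javascript
// Hypothetical helper: pravatar returns a deterministic placeholder
// avatar for a given index.
const avatarUrl = (i) => `https://i.pravatar.cc/150?img=${i}`;

console.log(avatarUrl(7)); // https://i.pravatar.cc/150?img=7
```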

Common mistakes

Using real names or emails as template data. Even if you replace the values before committing, accidentally using a real person's information as a placeholder puts that person at risk if the data leaks during review or is cached in a tool. Use synthetically generated data from the start.

Generating too many rows at once in the browser. The tool caps at 100 rows, which is intentional. If you need 10,000 rows, generate 100, copy the output, and loop a script to multiply it — or switch to Faker.js for bulk generation. Attempting to generate thousands of rows in a synchronous browser task blocks the UI thread.
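One way to sketch the "multiply a small batch" approach described above — field names are assumed to match the tool's JSON output, and the email suffix keeps each copy unique so database uniqueness constraints are not violated:

```javascript
// Replicate a small generated batch, suffixing the email local part
// with a copy counter so every row stays unique.
function multiply(batch, copies) {
  const out = [];
  for (let c = 0; c < copies; c++) {
    for (const row of batch) {
      out.push({ ...row, email: row.email.replace("@", `+${c}@`) });
    }
  }
  return out;
}

const batch = [{ first_name: "Priya", email: "priya.nair@outlook.com" }];
const big = multiply(batch, 100);
console.log(big.length);   // 100
console.log(big[1].email); // priya.nair+1@outlook.com
```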

Forgetting to remove test data before production deployment. Synthetic records that were seeded during development occasionally slip into production databases when staging environments are promoted. Add a migration step or a seed-flag column so test rows can be deleted with a single query before go-live.

Assuming the generated email addresses are deliverable. They follow the user@domain.com format and will pass most regex validators, but the mailboxes do not exist. Do not use them as addresses to send actual email — bounces will hurt your sending domain's reputation.

Privacy

All data is generated in your browser. No input or output is transmitted to any server, logged, or stored. The tool does not require an account, and there are no cookies tied to what you generate. You can use it safely on an air-gapped machine or behind a corporate firewall.

The generated data does not correspond to any real person. Names, email addresses, phone numbers, and addresses are constructed from synthetic pools. There is no lookup against any registry of real individuals. If a generated record happens to match a real person's details, that is coincidence — not a data leak.

For additional placeholder content needs — paragraphs of dummy copy for a layout, for instance — the Lorem Ipsum Generator produces filler text, and the Password Generator can supply random token-like strings for any credential fields in your test dataset.

Frequently asked questions

Is the fake data truly random?

Each value is drawn at random from a fixed pool of names, cities, domains, and other field-specific lists using Math.random for selection. The output is non-deterministic — you will get a different batch each time you click Generate — but the distribution is constrained by the pool size. It is not cryptographically random in the sense that a UUID is, and it is not suitable for generating security-sensitive values like tokens or keys. Use the Password Generator for those.

Can I use this for load or performance testing?

For small baselines — a few hundred rows to profile index behavior or test query plans — yes. Generate up to 100 rows, copy the SQL output, and paste it into your database. For genuine load tests that require millions of rows, this tool is not the right fit. Use a library like Faker.js inside a Node.js script and stream the output to your database in batches. The browser-based approach is optimized for quick, manual tasks, not bulk automation.

How do I import the CSV into MySQL or PostgreSQL?

In MySQL, use LOAD DATA INFILE with FIELDS TERMINATED BY ',' and IGNORE 1 ROWS to skip the header. In PostgreSQL, use \COPY users FROM 'file.csv' CSV HEADER;. Both commands expect the first row to be the header, which the tool always includes. Make sure the column order in your CREATE TABLE statement matches the column order in the CSV, or use an explicit column list in the COPY command.

What is the difference between this and Faker.js?

Faker.js is a Node.js library you install and call from code. It supports 60+ locales, hundreds of field types (credit card numbers, ISBNs, color names, vehicle information, and much more), and full programmatic control over generation logic. This tool is a no-install browser widget. The trade-off is scope: Faker.js does more; this tool is faster to reach for when you need a small batch of records and do not want to write code. For any project that requires test data as part of a repeatable CI pipeline, add Faker.js as a dev dependency instead.

Is the generated data GDPR-compliant to use in demos?

Yes — synthetic data that does not relate to any identified or identifiable natural person falls outside the scope of GDPR. Article 4 defines personal data as information relating to an identifiable person; data that was never derived from a real individual carries no such risk. You can share screenshots, screen recordings, and demo databases built from this tool's output without a data processing agreement. That said, this is not legal advice — confirm your organization's policy on synthetic data before using it in an external-facing context.

Can I generate fake data for non-English names?

Not in the current version. The name pool is English-language only. If you need names from other locales — French, Japanese, Arabic — install Faker.js and set the appropriate locale. For example: import { faker } from '@faker-js/faker/locale/fr'. Locale support for the browser tool may be added in a future release.