Businesses are increasingly using artificial intelligence (AI) and algorithmic tools to set and adjust prices for consumers. These models often rely on personal data—such as location, browsing history, and demographic information—to tailor prices to individual consumers. This practice, known as “personalized” pricing (sometimes referred to more negatively as “surveillance” pricing), is drawing scrutiny from lawmakers and regulators across the United States over concerns about fairness, transparency, privacy, and potential discrimination.
For consumers, personalized pricing may manifest when a traveler sees higher hotel rates after searching from a high-income ZIP code or when a shopper browsing from a mobile device is offered a different price than one browsing from a desktop. This client alert provides an overview of the legal and regulatory landscape surrounding personalized pricing, highlights key enforcement actions and legislative developments, and outlines compliance considerations for companies and their legal teams.
I. Understanding Personalized Pricing
Algorithmic pricing is an umbrella term encompassing two related but distinct practices, each with different legal implications. Dynamic pricing adjusts prices in real time based on supply, demand, competitor pricing, or other market conditions. Personalized pricing, the topic of this client alert, is the use of an individual consumer’s personal data to set prices or to extend special offers or promotions to that consumer.
Personalized pricing has been widely adopted by companies selling goods and services. These tools allow companies to make pricing decisions efficiently and competitively by analyzing, at far greater scale and speed, the kinds of data retailers have historically relied upon. According to the National Retail Federation, algorithmic pricing tools allow companies to offer prices that support customer retention, consumer satisfaction, and a competitive edge in a dynamic marketplace. For example, some companies use algorithmic pricing software that analyzes transaction history to offer customers coupons or loyalty rewards for products they purchase regularly.
As personalized pricing becomes more widespread, state legislators and regulators have increasingly scrutinized the practice for a variety of reasons. The rise in the costs of essential goods and services—from housing to healthcare—is one motivating factor. Policymakers are focused on whether personalized pricing practices could result in higher prices for consumers who can least afford them, reflecting concerns that such practices may undermine affordability and fairness in consumer markets. Some state officials, including in New York1 and Maryland,2 have emphasized the risk that consumers may unknowingly pay different prices for the same goods or services based on an algorithm’s opaque assessment of their personal data. Other state officials, including in California,3 have expressed concern that personalized pricing practices implicate broader concerns regarding consumer privacy.
II. Legal and Regulatory Landscape
Algorithmic pricing’s growing popularity has led to a sharp uptick in legislative and enforcement action, as lawmakers and regulators seek to understand and set rules for personalized pricing. On the legislative front, one state (New York) has enacted a law relating to the use of consumer data to set prices. However, in January and February 2026 alone, over 35 algorithmic pricing bills were introduced across the country. This level of activity signals that state lawmakers view targeted regulation of algorithmic pricing as a priority. On the enforcement side, regulators have advanced several consumer protection theories against the use of algorithmic pricing, including allegations that personalized pricing may violate data privacy laws or constitute unfair or deceptive practices in certain circumstances.
a. Federal Regulatory and Legislative Activity
The Federal Trade Commission (FTC) undertook a major study of surveillance pricing in 2024–25 using its 6(b) investigative authority. In January 2025, the FTC issued a preliminary report indicating that companies were using data elements such as a consumer’s precise location, browsing history, and demographic traits to charge different prices for the same goods. For example, the FTC found instances of online retailers offering different prices for identical products based on a customer’s skin tone or whether they were a new parent. The agency also opened a public Request for Information seeking comments on consumers’ experiences with these pricing practices. However, after a change in FTC leadership in mid-2025, the inquiry was halted, and the new FTC chair closed the comment period early.4
Despite the pause in formal study, the FTC signaled ongoing concern through enforcement investigations. In December 2025, news broke that the FTC had sent a civil investigative demand to Instacart to probe its algorithmic pricing practices. The FTC’s inquiry is reportedly examining whether Instacart’s Eversight pricing algorithm, which lets retailers test different prices on the platform, constitutes an unfair or deceptive practice under federal law.5
Congressional activity around personalized or algorithmic pricing has been relatively limited, but a few notable bills have been introduced (none enacted to date). These include the Stop AI Price Gouging and Wage Fixing Act of 20256 and the One Fair Price Act of 2025,7 both of which would prohibit the use of surveillance-based pricing; the AI Civil Rights Act (reintroduced in December 2025), which would bar discriminatory algorithmic practices and impose audit and notice requirements on companies using AI tools;8 and the Stop Price Gouging in Grocery Stores Act, introduced in 2026, which would prohibit the use of surveillance pricing specifically in grocery stores.9 These legislative efforts broadly indicate bipartisan awareness of the issue: Democratic sponsors have emphasized consumer protection, while Republican interest has been framed around price gouging, market fairness, and transparency. On March 5, 2026, House Committee on Oversight and Government Reform Chairman James Comer (R-Ky.) launched an investigation into the use of AI to analyze consumer data in setting prices.10 Comer sent letters to several travel accommodation companies, including rideshare services and booking websites, alleging that the rise of surveillance pricing algorithms allowed companies to weaponize personal data to improve margins at the expense of price transparency.
b. State Laws and Legislative Activity
US states have been highly proactive since 2025 in addressing personalized pricing, leading to a patchwork of emerging laws and proposals.
New York is currently the only state with a personalized pricing law in effect. The Algorithmic Pricing Disclosure Act, which took effect November 10, 2025, requires any business that uses personal consumer data to set a price to provide a “clear and conspicuous” disclosure at the point of sale, stating: “THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA.” Businesses must print or display this notice whenever a personalized price is shown. New York’s approach is a transparency mandate—it does not ban price personalization but instead aims to ensure that consumers know when their data is influencing prices. As noted below, early enforcement has already begun.
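For online sellers, New York’s mandate can be operationalized as a simple rendering rule at the point of sale. The sketch below is illustrative only (the function and parameter names are hypothetical, not statutory requirements): it appends the statute’s required notice whenever the displayed price was set using the consumer’s personal data.

```python
# Illustrative sketch of New York's disclosure requirement.
# The function and parameter names are hypothetical; only the notice
# text itself is prescribed by the statute.

NY_DISCLOSURE = "THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA."

def render_price(price: float, personalized: bool) -> str:
    """Return the price display text, appending the statutory notice
    when the price was set by an algorithm using personal data."""
    display = f"${price:.2f}"
    if personalized:
        # The statute requires a "clear and conspicuous" disclosure
        # at the point of sale whenever a personalized price is shown.
        display += f"\n{NY_DISCLOSURE}"
    return display
```

In practice, the notice would need to be rendered near the price itself (e.g., adjacent to the on-screen price in a checkout flow), not buried in a privacy policy.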
Many other states have followed New York’s lead, introducing their own bills on algorithmic or personalized pricing. Noteworthy patterns are:
- Broad Bans vs. Disclosure Requirements: The majority of these 2025–26 state bills seek to prohibit the use of personal data in price setting, effectively outlawing personalized pricing. For example, bills introduced in 2026 in states such as Arizona, Florida, Hawaii, Maryland, Nebraska, New York, Tennessee, and Washington would make it unlawful to charge individualized prices based on consumer data. In contrast, a few states, including Illinois, are exploring transparency-focused bills similar to New York’s.
- Loyalty and Promotions Exceptions: Notably, even the strictest bills seeking to ban surveillance pricing include carve-outs to permit common pricing practices that consumers expect. For instance, many 2026 proposals explicitly exempt loyalty programs, rewards, coupons, or temporary sales from their definition of prohibited personalized pricing. As a result, retailers can still offer individualized discounts or promotions to rewards members based on purchase history, which are framed as opt-in benefits rather than hidden price discrimination.
- Essential Goods and “Predatory” Pricing: Several states are zeroing in on essential consumer goods (such as groceries, fuel, and medicine) when regulating algorithmic pricing. Lawmakers in states such as Pennsylvania and Maryland have expressed alarm over dynamic or personalized pricing that could raise costs for necessities. For example, Maryland’s Protection from Predatory Pricing Act (governor-backed Senate Bill 387/HB 895, introduced in January 2026) would ban any use of dynamic pricing in grocery stores and additionally prohibit the use of surveillance data to set individualized grocery prices.
c. Enforcement Trends
In parallel with legislation, enforcement agencies have ramped up activities targeting algorithmic and personalized pricing practices:
- California: In late January 2026, California Attorney General Rob Bonta announced an investigative probe into surveillance pricing in the retail, grocery, and hotel sectors, examining potential violations of the California Consumer Privacy Act (CCPA).11 Specifically, Bonta expressed concern that personalized pricing practices may trigger obligations under—or even violate—the CCPA’s “purpose limitation principle,” which limits a business’s use of personal information to purposes consistent with the reasonable expectations of consumers.12 This is a novel enforcement angle: Instead of treating personalized pricing as a pricing or antitrust issue, California is scrutinizing it as a data privacy compliance matter. California’s enforcement posture underscores a broader point: Personalized pricing models not only implicate specific algorithmic pricing laws but also state privacy regimes when they rely on personal data or other inferred characteristics. In addition to the CCPA, these risks are amplified by the state’s newly effective Automated Decision-Making Technology regulations, which impose new notice, opt-out, access, and risk assessment obligations on businesses that use automated systems to make “significant decisions” about consumers based on personal information.13
- New York: In January 2026, New York Attorney General Letitia James sent a civil investigative demand letter to Instacart, examining whether the company complied with New York’s algorithmic pricing disclosure law.14 James’s letter follows a study finding that Instacart charged different prices for the same product—with some shoppers seeing prices up to 23 percent higher for identical items in the same store at the same time—and seeks detailed information about Instacart’s pricing practices and its compliance with the disclosure law.15 The state also issued a consumer alert warning New Yorkers about dynamic pricing, advising that personalized pricing entails the use and analysis of consumers’ personal data and providing tips on how to determine whether they are being subjected to personalized pricing.16
Given California and New York’s demonstrated willingness to pursue enforcement actions implicating algorithmic and personalized pricing, businesses implementing—or planning to implement—such tools should view recent activity in those states as part of a broader national trend rather than as isolated developments.
III. Key Legal and Compliance Issues for Companies Using Personalized Pricing
Amid this heightened regulatory climate, in-house legal teams should evaluate personalized pricing initiatives with an eye toward the following key issues.
- Transparency and Consumer Protection: Businesses deploying personalized pricing should consider building transparency and consumer choice into their offerings by default rather than treating them as afterthoughts. This includes, for example, plain-language disclosures at relevant points regarding the use of a customer’s data to offer individualized pricing, discounts, or product offerings. Companies should also avoid “charging for privacy,” for example by imposing higher prices or reduced access on consumers who decline personalized pricing. A core principle emerging from laws such as New York’s and many of the proposed bills is that consumers should be informed when personal data is being used to influence pricing. In the e-commerce context, that may mean an on-screen notice near the price in addition to disclosures in privacy policies.
- Data Privacy Compliance: The use of personal information in pricing triggers compliance obligations under the growing patchwork of privacy laws. Under the CCPA, using previously collected data for a materially different purpose—such as repurposing a customer’s location or browsing data to adjust prices—could violate the law’s purpose-limitation and data-use provisions unless the new use was disclosed and falls within the scope of the consumer’s original consent. Other state privacy laws (e.g., in Colorado, Virginia, and Connecticut) have “profiling” provisions requiring consumer opt-outs or assessments for automated processing that produces legal or similarly significant effects. Legal teams should audit the data feeding into pricing algorithms and confirm that this use is covered by privacy notices and user consents.
- Discrimination and Bias: Personalized pricing creates a risk—even if unintended—that certain consumer demographics consistently receive worse prices. For instance, an algorithm might charge higher prices based on a consumer’s ZIP code or browsing data, which can act as proxies for race or income. This could draw lawsuits or enforcement under civil rights and equal protection laws, especially if such pricing is used in credit, housing, employment, or other sensitive contexts. The Consumer Financial Protection Bureau has already warned that algorithmic pricing in lending or insurance could violate fair lending laws if it leads to racial disparities. Companies should proactively test their algorithms for bias and implement regular AI impact assessments focusing on outcomes across demographics.
- Recordkeeping and Vendor Management: Regulators often ask: Can you demonstrate how your algorithm works and why it is fair? To answer yes, companies should maintain robust internal documentation of their pricing models and data flows. Document what data is collected and used, how the algorithm weighs factors, and what measures are in place to prevent unintended outcomes (such as price spikes or biased results). This not only helps with compliance and potential investigations but also aids your own oversight. Additionally, if third-party vendors, consultants, or data brokers are involved in your pricing strategy, use strong contractual controls—require transparency into their methodologies, compliance with privacy and antitrust laws, audit rights, and indemnities for unlawful conduct.
- Monitoring Legal Developments: Finally, designate responsible team members to track emerging laws and regulations on algorithmic pricing in each jurisdiction in which you operate. The landscape is shifting quickly: some laws may require operational changes (e.g., building a disclosure mechanism or altering how loyalty discounts are offered), and regulatory guidance will likely continue to evolve.
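The bias-testing recommendation above can begin with simple outcome comparisons across demographic groups. The sketch below is a minimal illustration with hypothetical data and group labels (a real audit would use statistically rigorous methods and legally appropriate protected-class definitions); it computes the ratio of the highest group-average quoted price to the lowest as a crude disparity signal for further review.

```python
# Illustrative disparity check with hypothetical data: compare average
# quoted prices across demographic groups. A ratio well above 1.0 is a
# signal for deeper investigation, not a legal conclusion.

from collections import defaultdict

def group_price_gap(quotes: list[tuple[str, float]]) -> float:
    """Given (group_label, quoted_price) pairs, return the ratio of the
    highest group-average price to the lowest."""
    totals: dict[str, float] = defaultdict(float)
    counts: dict[str, int] = defaultdict(int)
    for group, price in quotes:
        totals[group] += price
        counts[group] += 1
    averages = [totals[g] / counts[g] for g in totals]
    return max(averages) / min(averages)

# Hypothetical audit sample: group B averages ~15% higher quotes.
sample = [("A", 10.0), ("A", 10.4), ("B", 11.5), ("B", 12.0)]
```

An audit program would run checks like this on logged pricing outcomes at regular intervals, alongside the AI impact assessments discussed above, and document the results as part of the recordkeeping practices regulators increasingly expect.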