As of March 31, 2025, PCI DSS v4.0.1 is live and with it comes a slew of new, updated, and altered requirements. In this document, we’ll explore what those changes are, answer key questions about new PCI requirements, and lay out what you need to do to make adjustments and be prepared for 4.0.1.
We won’t cover every single requirement here, but we will review the ones with the most impactful changes that will require you to make adjustments. Let’s explore what’s new in 4.0.1!
Starting off with 3.4.2: this requirement deals with the relocation of PAN data when you are remotely connected to a system with access to full PAN. Technical controls need to be in place to prevent the copying of PAN data by any user who is not authorized to relocate it. Users with a documented business need and legitimate authorization to copy PAN data are unaffected, but for anyone else using the system, you’ll need to find some way to make sure they can’t. That may mean reengineering a business practice or process. It’s always a good idea to limit access to cardholder data.
The simple answer may be to reengineer a business process or an application so that individuals remoting in don’t even have access to the data they could potentially copy. Otherwise, you’re going to have to come up with a technical control of some kind, and that may not be supported by your current software. There are techniques for disabling right-click and other context-menu options in applications, and you can also restrict copy and paste actions via JavaScript. To prevent users from copying data to a USB drive, there are also many ways to disable USB ports on computers. These capabilities may not be built directly into a VPN or other remote-access tool, so be sure to take some time to think about this new requirement and see how it may apply to your systems and processes.
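As an illustration of the JavaScript angle, here is a minimal browser-side sketch that blocks copy, cut, and context-menu actions on elements showing full PAN. The "pan-display" class name is hypothetical, and client-side controls like this are easily bypassed, so treat them as one layer alongside process redesign or endpoint controls rather than a complete answer.

```javascript
// Minimal sketch: block copy, cut, and context-menu actions on elements
// that display full PAN. Assumes such elements are tagged with a
// hypothetical "pan-display" CSS class.
document.querySelectorAll('.pan-display').forEach((el) => {
  ['copy', 'cut', 'contextmenu'].forEach((eventName) => {
    el.addEventListener(eventName, (e) => {
      e.preventDefault(); // cancel the browser's default copy/cut/menu behavior
    });
  });
});
```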
For 3.5.1.1, the focus is on keyed hashing. With this update, if you are using hashing methods to protect card data you’ll most likely need to implement a new method of hashing. This will be a different type of hashing algorithm, one that requires a protected key to work, much like a key used in an encryption algorithm.
Most companies are aware that keyed hashing is going to be an important element to adopt, but what they might not have paid attention to is the fact that they will now have to manage that hash algorithm’s key. The other later requirements, 3.6 and 3.7, now start to come into play. Not only does it have to be a strong key, it has to be well managed using techniques like split knowledge and dual control.
Do your research and make sure that your algorithm really does support a key as described in PCI DSS requirement 3.5.1.1. Doing cryptography is not always easy. It’s recommended to use a library that has a validated random number generator and the ability to generate keys; that’s generally better and more secure than trying to write your own algorithm.
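For a concrete picture of what keyed hashing looks like, here is a minimal sketch using Node.js’s built-in crypto library (an assumption; any validated cryptographic library works). The key is generated in place purely for illustration; in practice it has to be managed per requirements 3.6 and 3.7, for example in an HSM or key vault, and never stored alongside the hashes.

```javascript
const crypto = require('crypto');

// Illustration only: generate a 256-bit key with a validated library routine.
// In production this key must be protected and managed per PCI DSS 3.6/3.7.
const hashKey = crypto.randomBytes(32);

// Keyed hash (HMAC-SHA-256) of a PAN, as opposed to a plain unkeyed hash.
function keyedHashPan(pan, key) {
  return crypto.createHmac('sha256', key).update(pan, 'utf8').digest('hex');
}

// Test value only -- never log real PANs.
console.log(keyedHashPan('4111111111111111', hashKey));
```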
As of March 31, 2025, you can no longer use full disk encryption as a method for protecting card data.
A common scenario: in some systems it was challenging to implement field-level encryption in the database, so organizations relied on disk-level encryption, which only protects data when the machine is powered off. If you’re relying on it to protect your server, that is no longer allowed; you’ll have to implement a better encryption solution. Most companies are not using that technology anymore, but if you are, changes will have to be made.
This requirement is focused on phishing protection and training.
Phishing is one of the most common attacks. Hackers are exploiting the human element of organizations, which has always been the weak link in any system. Companies will most likely need to implement some form of technology on an email server to detect and ideally prevent phishing attacks via email. An assessor’s focus will be to ensure some form of anti-phishing system is in place and to ask questions about how it works.
Additionally, phishing doesn’t only come via email; it may come via phone, social media, etc. There needs to be a training element in your phishing protection to help your team identify who is really trying to contact them and to remember that if something sounds suspicious, it might be. Determine the best ways to confirm identity, or go check with IT or with the person the communication seems to be coming from.
The typical tactics used in phishing communication are impersonation, creating a sense of urgency, and applying subtle pressure to get you to react in the moment instead of following your proper procedures. Train employees to spot that, to follow the right process, and to reinforce that they won’t be punished for following that process.
While there might be controls on the email server, they may not catch everything. That’s why it’s important to have a training program in place.
6.3.2 deals with bespoke software. Bespoke means custom software that you’ve written yourself or that someone else has written specifically for you.
The focus of this requirement is that if you’ve asked somebody to write software for you, you need to get a software bill of materials: a detailed outline of all the third-party software, libraries, and elements used in that software. If you’re writing the software yourself, you are responsible for creating that inventory, because vulnerabilities can also arise in those included elements and need to be tracked. If you don’t know what’s in your software, it could be vulnerable to a specific attack, and you’d never patch it because you weren’t aware of any third-party vulnerabilities. Ultimately, this requirement is trying to get companies to track all the software being used in card data processing, including third-party packages and libraries.
This is an additional documentation step that may be new to a development team. The requirement forces companies to keep better track of those details. Document carefully and make your processes traceable so assessors can see evidence that you’re doing this important task.
There are various ways to do that; assessors are simply looking for evidence that you have a process for reviewing these third-party libraries. Documentation might look like answers to questions such as which libraries and versions are in use, where they came from, and how you monitor them for vulnerabilities.
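As a starting point, here is a minimal sketch that pulls a component inventory from a Node.js project’s package.json (an assumed project layout). A real software bill of materials, such as CycloneDX or SPDX output from a dedicated SBOM tool, would also capture transitive dependencies, hashes, and licenses.

```javascript
const fs = require('fs');

// Minimal sketch: list the direct third-party dependencies declared in a
// Node.js project's package.json as a first-pass component inventory.
const pkg = JSON.parse(fs.readFileSync('package.json', 'utf8'));

const components = Object.entries({
  ...(pkg.dependencies || {}),
  ...(pkg.devDependencies || {}),
}).map(([name, version]) => ({ name, version }));

console.log(JSON.stringify(components, null, 2));
```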
This requirement is about web application firewalls (WAFs). A WAF can be firewall hardware, software, or even a third-party cloud service that sits in front of your web application, watches the traffic coming in, and blocks anything identified as obviously malicious.
Previously, you could do some sort of manual or automated code review in place of a WAF. But now, with 6.4.2, the manual code review is no longer an option. You have to have a web application firewall of some kind. It’s a pervasive technology that you simply need to have.
This is one of the most vital and heavily discussed requirements since the PCI DSS update and it covers the tracking of scripts used on your web application payment pages.
6.4.3 is asking, “Do you know what scripts are being added to your payment page?” It’s becoming more and more common for small merchants to be targeted for data skimming during checkout. This can be one of the more difficult requirements to meet; however, there have been almost two years to work on it. Many solutions are available, and some are still being built, improved, and rolled out in the industry by service providers.
To be compliant here, you have to know every script on your payment pages, where it came from, and who authorized its use, and all of that needs to be documented and tracked. Keep in mind that some pages can have large numbers of third-party scripts running on them. This requirement was added by the council to help people be aware of how many scripts are in use on payment pages or referring payment pages.
Even if you only have a handful of scripts on your pages, hackers can still dynamically inject scripts that aren’t present when the page is static.
Fortunately, there are many tools out there like SecurityMetrics’ Shopping Cart Monitor that both meet requirement 6.4.3 and report on suspicious scripts on payment pages directly to you. Otherwise, it would require a manual review of the code, which can be difficult and painstaking due to the number of scripts and how scripts can be dynamically included at any time during the payment process. It’s vital to know everything that gets loaded in during the actual payment process.
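To illustrate why dynamically injected scripts matter, here is a minimal browser-side sketch that logs the scripts present at load time and warns when new script elements are added later. It only demonstrates the problem; it is not a substitute for a purpose-built monitoring service.

```javascript
// Minimal sketch: inventory the scripts present at load time and warn when
// new <script> elements are injected into the page afterwards.
const logScript = (s) => console.log('script:', s.src || '(inline)');

document.querySelectorAll('script').forEach(logScript);

new MutationObserver((mutations) => {
  mutations.forEach((mutation) => {
    mutation.addedNodes.forEach((node) => {
      if (node.tagName === 'SCRIPT') {
        console.warn('script injected after page load:');
        logScript(node);
      }
    });
  });
}).observe(document.documentElement, { childList: true, subtree: true });
```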
6.4.3 can feel overwhelming and technical, but getting the right tools to help is important, especially for small merchants who may not even know what a script is.
This centers on passwords, specifically their strength and length, which is fairly straightforward.
The requirement states that if you’re using passwords as part of your authentication process, they now have to be at least twelve characters long with a level of required complexity. Previously, a seven-character minimum was the standard, but now passwords are going to have to be longer.
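As a rough sketch of the new minimum, the check below enforces a length of at least twelve characters containing both alphabetic and numeric characters; adapt it to the exact wording of the requirement and your own password policy.

```javascript
// Minimal sketch of the new password rule: at least 12 characters,
// containing both alphabetic and numeric characters.
function meetsPasswordPolicy(password) {
  return (
    password.length >= 12 &&
    /[A-Za-z]/.test(password) && // at least one letter
    /[0-9]/.test(password)       // at least one digit
  );
}

console.log(meetsPasswordPolicy('correcthorse42battery')); // true
console.log(meetsPasswordPolicy('short1'));                // false
```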
8.3.10.1 is ultimately pointed at service providers. If you give your customers access to your systems, this new requirement is about changing those passwords.
As a service provider, if you give one of your clients access to your application to view their transactions, for example, you’re now required to change those passwords at least every 90 days or have tools in place to dynamically analyze each request and grant access based on a number of other factors (i.e., location, time, device, etc.), not just a password. The reason for this addition is a rise in attacks stemming from the compromise of access passwords that remain unchanged for long periods of time.
In these kinds of attacks and breaches, it’s not the strength of the security measures placed around service provider systems that’s the problem; it’s a password choice made by a client that is then used to attack the service provider’s application from an “inside” perspective. Therefore, PCI DSS 4.0 now asks service providers to force clients into better password hygiene, or the service provider can choose to add dynamic analysis tools to enhance the security of a client-chosen password.
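On the 90-day side of the requirement, here is a minimal sketch that flags customer accounts with stale passwords. It assumes you store a lastPasswordChange timestamp per account; the account records shown are made up for illustration.

```javascript
// Minimal sketch: flag customer accounts whose passwords are older than 90
// days, assuming each account record carries a lastPasswordChange date.
const MAX_AGE_DAYS = 90;

function passwordExpired(account, now = new Date()) {
  const ageDays = (now - new Date(account.lastPasswordChange)) / (1000 * 60 * 60 * 24);
  return ageDays > MAX_AGE_DAYS;
}

const accounts = [
  { user: 'client-a', lastPasswordChange: '2025-01-02' },
  { user: 'client-b', lastPasswordChange: '2025-06-15' },
];

accounts
  .filter((a) => passwordExpired(a, new Date('2025-07-01')))
  .forEach((a) => console.log(`${a.user}: force a password reset`));
```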
This requirement isn’t a difficult one, but it's something you’ll be glad you have in place.
8.4.2 is many years in the making, dealing with multifactor authentication and how it needs to be applied. Previously, you were required to use multifactor authentication when accessing your network and systems in an administrative role from an “outside” perspective (i.e. the Internet). Now, this requirement states that MFA has to be used for all non-console access into the card data environment for any role from any location (inside or out).
Even if your CDE is segmented off and you're accessing the systems from within your own corporate network, you still have to implement MFA to access the CDE if you're not standing at the console.
MFA requirements have been in place for a while and require two out of three factors: something you know (a password), something you are (a biometric), or something you have (a passkey or token). Additionally, the council has added more definition around what is considered a good MFA system.
PCI Requirement 8.5.1 concerns the settings and capabilities of an MFA solution as it is implemented.
This requirement will force you to dig into the details of the MFA solution you have chosen. Start with the documentation provided by the vendor and review it to make sure your solution addresses the risk of replay attacks and explains how it defends against them.
Next, review any configuration settings used during setup to ensure the solution is configured in accordance with the details contained in 8.5.1: namely, that bypassing the MFA system is not possible for any user unless specifically documented and authorized by management in very rare cases. Also confirm that all factors of authentication have to be verified and successful before any access is granted to the network or system.
This may mean you have to learn more about your current MFA solution to see if it can meet the requirements and if not, search for a new solution to implement.
PCI Requirement 8.6.2 deals with hard-coded passwords, and you’ll need to be able to provide evidence to your assessor during the audit that you’re handling them properly.
It’s fairly common for organizations to store passwords in places like configuration files to get systems running, but the new requirement no longer allows entities to have clear-text passwords written in configuration files, scripts, or text files.
This may mean you have to recode and rework some of your startup processes.
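One common pattern, shown in the minimal sketch below, is to read the credential from the environment at startup instead of from a clear-text config file; in practice that environment value would typically be injected by a vault or secrets-management service rather than set by hand.

```javascript
// Minimal sketch: read a credential from the environment at startup instead
// of from a clear-text configuration or properties file.
const dbPassword = process.env.DB_PASSWORD;

if (!dbPassword) {
  // Fail fast rather than falling back to a hard-coded default.
  throw new Error('DB_PASSWORD is not set; refusing to start');
}

// ...pass dbPassword to your database client here instead of reading it
// from a file checked into the repository.
```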
Here we’re dealing with system or application accounts. Systems and applications also need to authenticate to various services while running, and they use accounts with passwords to get that access. These are not user accounts for interactive login, but they still need to be protected from misuse.
Typically, it’s either the software or the system that has a password it utilizes as it boots up. Now the requirements state that those have to be changed periodically, which is a significant change.
Many updated requirements ask you to define a periodicity for how frequently you need to do something. PCI DSS now wants you to perform a Targeted Risk Analysis (TRA) to determine how often you should do things like updating passwords in your systems, and it suggests you document your process to show why you made that choice, whether it’s weekly, monthly, or quarterly, and why that frequency is secure.
Ultimately, it’s important to prove to both your assessor and yourself that you’ve measured and considered the risk by utilizing a TRA.
In 4.0 in general, the focus on TRAs and periodicity has been added so you document your risk analysis in a more defined way across the board. Learning how to define your processes, and being able to explain them and the settings you have chosen to your assessor, is essential.
For 10.4.1.1, you’ll need to have an automated mechanism for doing your log reviews. This is what’s typically called a SIEM, or Security Information and Event Management system.
Using a SIEM tool is now mandated. Manual review of logs is no longer a viable option, and it’s likely that not many companies were doing that anyway. Thankfully, there are multiple options to meet this requirement: you can outsource some of it or buy your own software and configure it to alert you. You also need to store logs according to the requirement’s specifications: at least three months immediately available for review and twelve months retained in total.
You can use multiple methods as long as you can explain to your auditor how you receive alerts from the automated system that tracks the required logs, how someone reviews those alerts daily, and how you decide whether the configuration of those monitoring tools needs to change.
10.7.2 is about critical control systems, detecting if they fail, and who to notify when they do.
10.7.2 now supersedes 10.7.1.
Previously, service providers had to monitor everything from firewalls and quarterly scans to IDS and IPS. The change here in 10.7.2 is that it now applies to everybody, not just service providers. You have to monitor the health and functioning of the systems mentioned in the requirement.
Overall, this is raising the bar for everybody to better protect themselves.
This section concerns internal vulnerability scans. The shift is now toward authenticated scanning.
Whichever software you’re using to do your quarterly internal vulnerability scans, it has to be able to authenticate to the exposed services or software detected during a scan. This helps you know what a bad actor might be able to see if they get inside your network and start looking around. You want to defend your systems better, so checking what would happen if an attacker obtains credentials inside your network tells you what other vulnerabilities lie behind those authenticated interfaces. Overall, it’s an elevated way to do internal scans.
This requirement is only for service providers and is about monitoring outbound covert malware communication channels, which can be a little confusing.
As the defenders and good guys in cybersecurity up their game, the attackers and bad guys do as well. Hackers are figuring out new ways to get information in and out of companies and to remotely control systems. They often use a port that you typically allow out through the firewall, like DNS, and their traffic masquerades as DNS packets going out when in reality it’s something else.
That’s a covert malware channel. Your IDS, IPS, or firewall has to be capable of monitoring for that and doing more of a deep packet inspection. When that detection system alerts on a possible issue, the alert has to be dealt with, and your assessor will be looking for your process for handling those alerts as well.
Many firewalls already have this capability, but you might have to pay a little extra to update your licensing to enable the feature.
11.6.1 is the other half of the e-skimming script monitoring in 6.4.3, which was mentioned earlier.
6.4.3 is about making sure you know what all the scripts are and that they’re authorized. 11.6.1 is about making sure your payment pages and referring payment pages are monitored for malicious scripts on a cadence that can be defined by a TRA.
It’s up to you to determine how often you want to monitor your pages. That decision is impacted by things like how much your web software changes and how many transactions you process; you need to determine the risks and check for these malicious scripts as necessary. By default, the requirement calls for checking at least once every seven days. Bad guys can get malicious scripts into systems, even ones that are protected with CSP and SRI.
CSP and SRI can be part of a solution for 11.6.1, but they can't meet the whole requirement.
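For context on what those partial controls look like, here is a minimal sketch of a Content-Security-Policy header set from a bare Node.js server, with a Subresource Integrity example in a comment. The hostname and hash are placeholders, not real payment-provider values, and these controls only restrict or pin script sources; they don’t provide the inventory, authorization, and tamper-detection evidence that 6.4.3 and 11.6.1 ask for.

```javascript
const http = require('http');

// Minimal sketch: a Content-Security-Policy header that limits where scripts
// may be loaded from. The payment-provider hostname below is a placeholder.
http.createServer((req, res) => {
  res.setHeader(
    'Content-Security-Policy',
    "default-src 'self'; script-src 'self' https://checkout.example-psp.com"
  );
  // Subresource Integrity (SRI) lives in the page markup, e.g.:
  //   <script src="https://checkout.example-psp.com/pay.js"
  //           integrity="sha384-...base64-hash-of-the-expected-file..."
  //           crossorigin="anonymous"></script>
  res.end('<html><body>payment page</body></html>');
}).listen(8080);
```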
Many tools exist that address this requirement, like SecurityMetrics’ Shopping Cart Monitor. This is one of the most worried-about, talked-about, and debated of the new PCI DSS 4.0 requirements.
Several of the common e-skimming compromises only happen after you complete a purchase or after a customer enters their CVV number. That's when all the bad scripts can appear. Since it's a difficult problem, a professional solution may be the best option to ensure this gets squared away.
12.3.4 is about monitoring hardware and software technologies to ensure they're all getting patches.
This requirement aims to up the game of certain security steps found in requirement 6, which focuses on being aware of vulnerabilities.
You have to know what's running in your environment. Not only the hardware and software that you have running, but any software that may have been created for you or software you may have purchased from somewhere else. You need to match that against the vulnerability information that comes out on a regular basis. In addition, this requirement deals with any “end of life” software or OS situations that might soon exist in your environment. You need to have a plan to deal with soon-to-be-unsupported software as well.
It’s vital to know what's in your environment in a better, deeper way than you may have in the past.
Additionally, having a documentable, auditable process around that is essential for your auditors. Again, your assessors will be checking that you're getting all your patches taken care of. You should never assume that it's happening automatically.
This requirement is about scope validation and documentation. You now have to have documentation showing that you have reviewed your scope annually. These are likely things you’ve already been doing, but now you need documented proof.
If any major change takes place in your network, data flows, storage locations, third-party connections, etc., you must revalidate your scope. For example, if you modify a payment data pathway, or you change your firewalls, you need to redo your scope validation. Data discovery tools are often used during scoping exercises to ensure unsecured card data does not show up in unsuspected places. You may want to look into the SecurityMetrics PANScan tool for this purpose.
For 12.10.7, you need to run your incident response plan if you ever find unencrypted PAN where it’s not supposed to be. Your incident response plan needs to do a few crucial things: determine how long the unencrypted PAN has been there, discover the root cause, fix the issue, analyze what happened, and capture the lessons learned.
This requirement also implies that you need to be looking for unencrypted card data, to some degree, and trying to determine if you need to modify your process.
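As a rough illustration of what such a data-discovery check might look like, the sketch below scans a text file for 13- to 16-digit runs and keeps only those that pass the Luhn check; the file name is arbitrary. Purpose-built tools like PANscan cover far more formats, locations, and encodings than this.

```javascript
const fs = require('fs');

// Luhn check to filter out random digit runs that are not valid card numbers.
function luhnValid(digits) {
  let sum = 0;
  for (let i = 0; i < digits.length; i++) {
    let d = Number(digits[digits.length - 1 - i]);
    if (i % 2 === 1) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
  }
  return sum % 10 === 0;
}

// Very rough discovery pass over a single text file.
const text = fs.readFileSync(process.argv[2] || 'sample.log', 'utf8');
const candidates = text.match(/\b\d{13,16}\b/g) || [];

candidates.filter(luhnValid).forEach((pan) => {
  // Mask the middle digits when reporting so the finding itself isn't a leak.
  console.log('possible unencrypted PAN:', pan.slice(0, 6) + '******' + pan.slice(-4));
});
```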
Recently, the PCI council made a change to the SAQ A, which removed PCI DSS requirements 6.4.3 and 11.6.1 from the SAQ A validation list, if you meet certain eligibility requirements. The eligibility requirements essentially want to make sure that your website is not susceptible to malicious scripts and that any included payment elements on your page or your referring page are not susceptible to attacks.
To help with all this, the Council published an FAQ on the SAQ A updates.
Effectively, you can meet the eligibility statement in one of two ways: the first way is to essentially meet 6.4.3 and 11.6.1.
The second way would be to work with a third-party service provider that takes over responsibility for meeting those requirements for you. Before selecting a provider to meet that for you, you may want to receive a responsibility matrix or some sort of contract document from the third-party service provider outlining how they’ll meet this eligibility statement for SAQ A for you. That third-party service provider is essentially taking on the risk of those requirements instead of you.
Most QSA or assessor firms have a template of policies and procedures that they can get to you as a starting point to get a leg up on 4.0.1. However, you still have to do the work to modify them for yourself.
Start by writing your process down. Document what your network security control rule set is. There's no generic policy that knows what that answer is, as it is dependent on your business.
But you don't have to start with a blank piece of paper and wonder where to start.
There is a lot of help available to you.
The simple answer? You probably won’t want to use the customized approach.
It’s very challenging, and there are rarely any good reasons for why it needs to be done.
While a large entity might want to do this because they have the resources to do so, meeting the standard requirement is usually much easier.
SecurityMetrics has a PCI guide that is updated and released annually; it’s especially useful for PCI 4.0 because it reviews the whole standard in straightforward terms. That’s a great place to start. It’s developed by a team of experts drawing on day-to-day, real-world experience.
Best of all, it’s completely free.
No PCI journey is complete in one day or one week. It requires a lot of energy, but if you invest the time and effort to ensure your organization is PCI compliant, you try and learn what applies to you, and work with experts to answer your questions, you will protect yourself and your customers with the best tools, standards, and elements available.
Download the SecurityMetrics complete PCI guide for free here.