My answer is given independently of any particular development technology.
Your essential question is:
"How do we know who put a code in the production environment?"
Ideally, the system running in the production environment is a closed, well-defined and uniquely identifiable version, carrying at least a version identifier such as "v.1.0.0". This kind of identification exists essentially so that administrators know exactly what is running on the server in terms of implemented functionality.
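As an illustration only, here is a minimal sketch (in Python, with a hypothetical module and value) of how an application might carry that identifier so administrators can check exactly which release is running:

```python
# version.py - hypothetical module bundled with each release.
# The build process would stamp this value when producing the release package.
__version__ = "v.1.0.1"

def report_version() -> str:
    """Return the version identifier so logs, health checks or an
    admin page can show exactly which release is running."""
    return __version__

if __name__ == "__main__":
    print(f"Running release {report_version()}")
```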
If your project uses version control the right way, the work process is, briefly, as follows:
Developers work on bug fixes and on new features stemming from specific requests, usually identified by ticket numbers in a change tracking system (which may or may not be integrated with the version control system). Changes to the code are made in a way that is traceable to those requests: for example, when committing in Git, developers can indicate in the commit message the ticket number to which the changes relate.
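As a sketch of how that traceability can be enforced, the following hypothetical Git `commit-msg` hook (written in Python, assuming ticket IDs look like "PROJ-123") rejects commits whose message does not reference a ticket:

```python
#!/usr/bin/env python3
# .git/hooks/commit-msg - hypothetical hook enforcing ticket references.
# Git passes the path of the file containing the commit message as argv[1].
import re
import sys

TICKET_PATTERN = re.compile(r"\b[A-Z]+-\d+\b")  # assumed ticket format, e.g. "PROJ-123"

def main() -> int:
    with open(sys.argv[1], encoding="utf-8") as f:
        message = f.read()
    if TICKET_PATTERN.search(message):
        return 0  # message references a ticket, allow the commit
    sys.stderr.write("Commit rejected: reference a ticket (e.g. PROJ-123) in the message.\n")
    return 1

if __name__ == "__main__":
    sys.exit(main())
```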
Someone responsible on the team plans the next version (for example, v.1.0.1) in terms of which fixes and features will be delivered. This person follows up with the developers on implementing the code and with the integrators on regression testing, and eventually requests the build of the new version, which will include only the planned changes. All of this happens off the production server, so it can be done while the current system keeps running and serving clients.
After the new version is generated, it is deployed to the production server (at some convenient point in time, for example during the night, when the system is used less and the downtime for the update causes less disruption). From the version identifier (v.1.0.1), the traceability provided by the tools, and above all the process (each problem is described in a specific ticket, and developers always remember to reference those tickets in their commit messages), it is easy to identify which changes were added to the new system running on the production server, and who made them. Even more importantly, in case of regressions it is easier to identify the cause.
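With tags such as v.1.0.0 and v.1.0.1 in place, answering "who changed what between releases" can be as simple as the following sketch (Python calling Git; the tag names and ticket format are assumptions):

```python
#!/usr/bin/env python3
# Hypothetical report: who contributed what between two release tags.
import re
import subprocess

TICKET_PATTERN = re.compile(r"\b[A-Z]+-\d+\b")  # assumed ticket format, e.g. PROJ-123

def changes_between(old_tag: str, new_tag: str):
    """Yield (author, tickets, subject) for every commit in new_tag but not in old_tag."""
    log = subprocess.run(
        ["git", "log", "--pretty=format:%an\t%s", f"{old_tag}..{new_tag}"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in log.splitlines():
        author, _, subject = line.partition("\t")
        yield author, TICKET_PATTERN.findall(subject), subject

if __name__ == "__main__":
    for author, tickets, subject in changes_between("v.1.0.0", "v.1.0.1"):
        print(f"{author:20} {', '.join(tickets) or '(no ticket)'}  {subject}")
```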
In short, you only know who put something on the production server if you keep a history of the updates made, and if future updates are produced independently of what is running today.
Using version control on the server is not a problem in itself, as long as the area from which the system runs and the area where development happens are kept separate. Otherwise, what is being executed (assuming it is interpreted code) will be constantly changing and you simply lose track of which features are actually in use. From the standpoint of performance and security, though, it does not seem a good idea to use the same server for both tasks: the same machine will be splitting resources between end customers and developers, and any need to stop it (to upgrade the version control software, for example) will impact the availability of the service to customers.
If you do not use any version control system (just FTP, as you say), it is even more important to keep the structures separate so that you have at least some manual control over which features are in execution. At some point, someone responsible will need to freeze coding activities in order to "bundle" the code into an identifiable version and place it on the production server.
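Even without version control, that "bundling" step can at least produce an identifiable artifact. A minimal sketch (Python; the directory names and version value are assumptions) that packs the code into a named, dated archive before it is uploaded by FTP:

```python
#!/usr/bin/env python3
# Hypothetical packaging script: freeze the code into an identifiable release archive.
import shutil
from datetime import date
from pathlib import Path

VERSION = "v.1.0.1"            # assumed identifier for this release
SOURCE_DIR = Path("src")       # assumed location of the code to be deployed
OUTPUT_DIR = Path("releases")  # archives kept here form a manual history of deployments

def bundle() -> Path:
    """Create releases/myapp-<version>-<date>.zip and return its path."""
    OUTPUT_DIR.mkdir(exist_ok=True)
    name = OUTPUT_DIR / f"myapp-{VERSION}-{date.today().isoformat()}"
    archive = shutil.make_archive(str(name), "zip", root_dir=SOURCE_DIR)
    return Path(archive)

if __name__ == "__main__":
    print(f"Upload this file to production via FTP: {bundle()}")
```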