We're running two processes, Nginx and an application, inside a Docker container. Both process lifecycles are managed by Supervisor. I want to understand: if I use the Docker logging driver for Splunk, will Splunk be able to automatically forward both processes' logs to the indexer?
It is a hard question, as it depends on how you set up Supervisor.
Let me start from the beginning. Running multiple processes in the same container is an anti-pattern. Try to avoid it as much as possible. Kubernetes, for example, has a great solution for your case: it can deploy two containers in the same Pod and set up communication between them over the same loopback network interface (127.0.0.1), so to the processes it will look like they are running in the same container. See https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-v... for details.
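As a sketch of that pattern, a minimal Pod spec might look like the following (image names are placeholders, not your actual images). Both containers share the Pod's network namespace, so they can talk to each other on 127.0.0.1:

```yaml
# Hypothetical Pod spec: two containers in one Pod share a network
# namespace, so the app and nginx reach each other via 127.0.0.1.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
  - name: app
    image: my-app:latest   # placeholder for your application image
```

Each container then writes its own stdout/stderr stream, so the logs stay separated per process.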
Unfortunately, Docker does not have this out of the box, and replicating the same setup can be problematic.
If you configure supervisord to redirect the output of its child processes to its own stdout/stderr, all the logs will end up in the container's stdout/stderr, and you can use a Docker logging driver to send them to Splunk. The problem with this approach is that all the logs, from supervisord, from nginx, and from every other process, land in the same stream, so it can be hard to distinguish them.
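A minimal sketch of that redirection in supervisord.conf (program names and commands are assumptions for illustration) could be:

```ini
; Route each managed process's output to the container's stdout/stderr
; so the Docker logging driver can pick it up.
[supervisord]
nodaemon=true
logfile=/dev/null          ; don't write supervisord's own log to a file
logfile_maxbytes=0

[program:nginx]
command=nginx -g 'daemon off;'
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0  ; required when logging to a device file
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:app]
command=/usr/local/bin/app  ; placeholder for your application binary
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
```

You would then run the container with the Splunk logging driver enabled, for example `docker run --log-driver=splunk --log-opt splunk-url=https://your-splunk-host:8088 --log-opt splunk-token=<your HEC token> ...` (replace the URL and token with your HTTP Event Collector settings).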
Another approach is to have a data volume for the application logs. Attach the same volume to the Splunk Universal Forwarder or another process (like our collector, https://www.outcoldsolutions.com) and forward these logs as you usually do. Don't forget to set up log rotation.
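With that approach, the Universal Forwarder container monitors the shared volume. A sketch of the inputs.conf stanzas, assuming the volume is mounted at /var/log/app and hypothetical file names and index/sourcetype values:

```ini
# Monitor the log files on the shared volume; paths, index, and
# sourcetype below are assumptions -- adjust to your layout.
[monitor:///var/log/app/nginx/access.log]
sourcetype = nginx:access
index = main

[monitor:///var/log/app/application.log]
sourcetype = my_app:log
index = main
```

Because each process writes to its own file, the logs stay separated and get their own sourcetype, which avoids the mixed-stream problem of the stdout/stderr approach.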