All Posts

Thank you for the assist, much appreciated. I have the sc_admin role on my user, and previously I was able to create and deploy apps using the Splunk Web UI. I think I'll raise a Splunk Support ticket to assess the issue and find a solution. Thanks once again.
Hi Team, We have recently started ingesting Apache access and request logs from an application, but the data parsing isn't working as expected. Could you please let me know the field names for these events so I can try to extract them manually? Alternatively, is there any format or add-on available that would enable automatic field extraction? If so, that would also be fine with me. For your information, our Splunk Search Head is hosted in the cloud and managed by Splunk Support. I have provided the log structure for both log sources for reference. Please check and advise.

Request Logs:
[09/Aug/2024:07:50:37 +0000] xx.yyy.zzz.aa TLSv1.2 ABCDE-FGH-IJK256-LMN-SHA123 "GET /share/page/ HTTP/1.1" xxxxx
[09/Aug/2024:07:50:37 +0000] xx.yyy.zzz.aa TLSv1.2 xxxxx-xxx-xxx256-xxx-xxx123 "GET /share/page/ HTTP/1.1" -

Access Logs:
xx.yyy.zzz.aa - - [09/Aug/2024:07:57:00 +0000] "GET /share/page/ HTTP/1.1" 200 xxxxx
aaa.bbb.ccc.dd - - [09/Aug/2024:07:56:53 +0000] "GET /share/page/ HTTP/1.1" 200 -

Thank you.
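As a starting point for the access-log format above, here is a minimal SPL sketch of a manual extraction (the index, sourcetype, and field names clientip, req_time, method, uri_path, http_version, status, and bytes are placeholders chosen here, not confirmed names; the request-log format would need a separate rex):

index=your_index sourcetype=your_apache_access
| rex "^(?<clientip>\S+)\s+\S+\s+\S+\s+\[(?<req_time>[^\]]+)\]\s+\"(?<method>\S+)\s+(?<uri_path>\S+)\s+(?<http_version>[^\"]+)\"\s+(?<status>\d+)\s+(?<bytes>\S+)"
| table _time clientip method uri_path status bytes

For the standard Apache combined format, Splunk's pretrained access_combined sourcetype extracts these fields automatically, and the Splunk Add-on for Apache Web Server on Splunkbase provides CIM-compliant extractions.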
Hi, OK, thanks, we will try that.
If you have the correct permissions to create an app, it should work after you have filled in all the needed fields. If it doesn't work, you should create a support ticket with Splunk; they can check what the issue is.
Hello Splunkers!! I am executing the script below to backfill the summary index from my saved search. The script works fine for 4th and 6th August, but it is not working for 5th August. Please suggest some potential reasons why the script fails for 5th Aug, although I have data available in the main index from which the saved search pushes data to the summary index.

Example of the command I executed for 5th Aug:

splunk cmd python fill_summary_index.py -app customer -name si_summary_search -et 1693883300 -lt 1693969700 -j 8 -owner admin -auth admin:yuuuyyyxx

I am also getting the warning below for 4th, 5th, and 6th only.
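One detail worth double-checking in the command above (an observation, not a confirmed diagnosis): the epoch values passed to -et and -lt appear to fall in early September 2023 rather than on 5th August. A quick conversion in SPL:

| makeresults
| eval et=strftime(1693883300, "%F %T"), lt=strftime(1693969700, "%F %T")

If the range doesn't cover the intended day, the backfill would find nothing to run for it.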
You should show panels based on the servers you have selected, using the panel's depends attribute. Something like:

<fieldset>
  <input type="multiselect" token="servers">
    <change>
      <condition value="Server A">
        <set token="t_US_DS">true</set>
      </condition>
    </change>
  </input>
</fieldset>
...
<panel depends="$t_US_DS$">
  ...
</panel>

I suppose you will get the idea from the above, and you can find the details in the dashboard documentation on docs.splunk.com. As you probably have a dynamic list of nodes, you may need to do this using lookups, with eval expressions in the conditions.
This could be a storage performance issue, or it could be that your environment is too small for your workload. As said, first you should check storage performance, and if it's sufficient, then look at other resource usage in your MC (Monitoring Console).
You could create a separate SHC just for the HFs, without any normal SH activity from end users. There is no need to use the regular end-user SHC; the clustering functionality is all that is needed!
Hi, I'm also running into this issue frequently. Has anyone worked on this or found a solution? Thanks!
Hi @Nraj87 , HA is a continuously open issue for DB-Connect. The easiest solution, as @isoutamo hinted, is to install DB-Connect in a Search Head Cluster, so the cluster gives you the requested HA features, but not all customers want to have an input processor (like DB-Connect) on front-end systems accessed by all users. I hope that Splunk will design a solution for this as soon as possible; for the moment, there's a request on ideas.splunk.com that you could vote for: https://ideas.splunk.com/ideas/EID-I-85. There are two problems to solve for HA: the checkpoint and the input enablement. A non-automatic workaround is to install DB-Connect on at least two HFs and create a scheduled script that makes a KV store backup from the main HF and restores it on the secondary one; in this way you align the checkpoint between the two HFs. Obviously, in case of disaster you'll have duplicated data for the period since the last alignment. Then you have to manually start the secondary DB-Connect when a disaster occurs and stop it when the disaster period is over. It's a workaround, not a solution, while waiting for the fix from Splunk that has been late for many years (vote for it!). Ciao. Giuseppe
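A rough sketch of the alignment script described above, using the standard KV store backup/restore CLI (the host name standby-hf, the archive name, the remote paths, and the schedule are assumptions; run the backup on the active HF and the restore on the standby one):

# on the active HF: archive the KV store, which holds the DB-Connect checkpoint collections
$SPLUNK_HOME/bin/splunk backup kvstore -archiveName dbx_checkpoints
# copy the archive to the standby HF (default archive location shown; adjust paths as needed)
scp $SPLUNK_HOME/var/lib/splunk/kvstorebackup/dbx_checkpoints.tar.gz standby-hf:/opt/splunk/var/lib/splunk/kvstorebackup/
# on the standby HF: restore the archive
ssh standby-hf "/opt/splunk/bin/splunk restore kvstore -archiveName dbx_checkpoints"

Scheduled via cron on the active HF, this keeps the standby checkpoint no older than the chosen interval, which bounds the window of duplicated data after a failover.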
Hi @Easwar.C, Thank you for posting to the community. Is your jar file placed under the path <JRE_HOME>/lib/ext, as indicated in the prerequisites for Object Instance Tracking? Also, when using the JDK runtime environment, we need to set the classpath using the -classpath option for the application. After these settings, restart the JVM. Moreover, it is worth checking whether the user currently running the JVM has read access to the jar file, because this error can be triggered when permission is denied. Hope this helps, Martina
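For the JDK case, a one-line sketch of the -classpath option mentioned above (all paths and the main class name are hypothetical placeholders, not values from this thread):

java -classpath /opt/app/myapp.jar:/opt/jre/lib/ext/mytracked.jar com.example.Main

The point is that the jar containing the tracked classes must be visible on the application's classpath before the JVM is restarted.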
Hi Team, I'm working on setting up a dashboard that includes the following EUM Browser metrics:
- Monthly Active Users
- Bounce Rate
- Session Duration
- Daily Average Active Users
Could anyone provide guidance on how to retrieve these metrics and display them on a dashboard? Best regards, Nivedita Kumari
Hi @Alnardo , which type of disks are you using for your Search Head and your Indexers? How many IOPS do your disks provide? Remember that Splunk requires at least 800 IOPS, and with more performant disks you'll have more performant searches. For more info see https://docs.splunk.com/Documentation/Splunk/9.3.0/Capacity/Referencehardware Ciao. Giuseppe
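If you want a quick look at disk activity from Splunk itself, the introspection data can help. A sketch, assuming a default _introspection setup (the host value is a placeholder, and the data.* field names are what IOStats typically reports):

index=_introspection host=your_indexer component=IOStats
| timechart avg(data.reads_ps) AS reads_ps, avg(data.writes_ps) AS writes_ps

Comparing these curves against search slowdowns can show whether storage is the bottleneck.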
Hi @Real_captain , the search seems to be correct and you should have results also for the present time; are you sure that you have data for the last day that match the conditions? Anyway, your solution with append is subject to the 50,000-result limit because it's a subsearch. About the graph, you should be able to plot one with your search; see the Visualization tab or a dashboard panel. Ciao. Giuseppe
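Where the 50,000-result subsearch limit bites, the usual pattern is to fold both datasets into one base search instead of appending. A generic sketch, since the original search isn't shown here (index and sourcetype names are placeholders):

(index=idx_a sourcetype=st_a) OR (index=idx_b sourcetype=st_b)
| stats count BY index

Because there is no subsearch, no truncation limit applies to either side.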
Hi @JJE , I'm not interested in your logs, only in the timestamp format! Anyway, check whether the timestamp has the format I described, and in that case use the TIME_FORMAT option in props.conf. Ciao. Giuseppe
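For illustration, a minimal props.conf sketch, assuming an Apache-style timestamp such as [09/Aug/2024:07:50:37 +0000] at the start of the event (the sourcetype name is a placeholder; adjust TIME_PREFIX and TIME_FORMAT to the actual format):

[your_sourcetype]
TIME_PREFIX = ^\[
TIME_FORMAT = %d/%b/%Y:%H:%M:%S %z
MAX_TIMESTAMP_LOOKAHEAD = 30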
Hi @pavithra , to answer your question I need more information: filename, path, column separator, sourcetype, index. Anyway, supposing that the file is called "myfile2024-08-09.csv" and that the path is "/opt/data/files", you could use this:

inputs.conf

[monitor:///opt/data/files/myfile*.csv]
disabled = 0
index = your_index
sourcetype = your_sourcetype
host = your_host

Then you should also configure props.conf with INDEXED_EXTRACTIONS = CSV. Ciao. Giuseppe
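A minimal props.conf sketch to pair with the monitor stanza above (the sourcetype name matches the placeholder used there; the timestamp column name is a placeholder, and TIMESTAMP_FIELDS can be dropped if the file has no time column):

[your_sourcetype]
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = timestamp

Note that with INDEXED_EXTRACTIONS this props.conf must be deployed on the forwarder that monitors the file, not only on the indexers.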
Thanks @richgalloway and @isoutamo for your time, it worked.
Hello, Can anyone help me get this error resolved?

2024-08-09 10:50:00,282 DEBUG pid=8956 tid=MainThread file=connectionpool.py:_new_conn:1007 | Starting new HTTPS connection (5): cisco-managed-ap-northeast-2.s3.ap-northeast-2.amazonaws.com:443
2024-08-09 10:50:00,312 DEBUG pid=8956 tid=MainThread file=endpoint.py:_do_get_response:205 | Exception received when sending HTTP request.
Traceback (most recent call last):
  File "/splb001/splunk_fw_teams/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/ta_cisco_cloud_security_umbrella_addon/aob_py3/urllib3/connectionpool.py", line 710, in urlopen
    chunked=chunked,
  File "/splb001/splunk_fw_teams/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/ta_cisco_cloud_security_umbrella_addon/aob_py3/urllib3/connectionpool.py", line 386, in _make_request
    self._validate_conn(conn)
  File "/splb001/splunk_fw_teams/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/ta_cisco_cloud_security_umbrella_addon/aob_py3/urllib3/connectionpool.py", line 1042, in _validate_conn
    conn.connect()
  File "/splb001/splunk_fw_teams/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/ta_cisco_cloud_security_umbrella_addon/aob_py3/urllib3/connection.py", line 429, in connect
    tls_in_tls=tls_in_tls,
  File "/splb001/splunk_fw_teams/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/ta_cisco_cloud_security_umbrella_addon/aob_py3/urllib3/util/ssl_.py", line 450, in ssl_wrap_socket
    sock, context, tls_in_tls, server_hostname=server_hostname
  File "/splb001/splunk_fw_teams/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/ta_cisco_cloud_security_umbrella_addon/aob_py3/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
  File "/splb001/splunk_fw_teams/lib/python3.7/ssl.py", line 423, in wrap_socket
    session=session
  File "/splb001/splunk_fw_teams/lib/python3.7/ssl.py", line 870, in _create
    self.do_handshake()
  File "/splb001/splunk_fw_teams/lib/python3.7/ssl.py", line 1139, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1106)
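For what it's worth, "unable to get local issuer certificate" usually means the Python client cannot build a chain to a trusted root, often because a TLS-inspecting proxy re-signs the connection or an intermediate CA is missing. A quick way to inspect the certificate the heavy forwarder actually receives (plain openssl, run from the HF host):

openssl s_client -connect cisco-managed-ap-northeast-2.s3.ap-northeast-2.amazonaws.com:443 -showcerts </dev/null | openssl x509 -noout -issuer -subject

If the issuer turns out to be your proxy's CA rather than Amazon's, that CA certificate would need to be added to the certificate bundle the add-on trusts.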
Hi All, Please provide conf files (inputs.conf, props.conf, outputs.conf) to index the below format of data on a daily basis.
Hi Team, Is there any way to create a Sankey-style tile for a single value? The image below explains the grouped value. I would like to break it into single values, like Account Locked and Invalid Login, in separate tiles.