All Topics



Hi there, I'm currently building a dashboard and need to display two dates: today's date and the previous working day. I use the query below for today's date: <query> index=main | head 1 | eval Date = strftime(_time,"%d/%m/%Y") | fields Date</query> Could you help me with a query to display the previous working day? Many thanks! Janine
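A sketch of one way to get the previous working day, assuming "working day" means Monday to Friday (public holidays are not handled, and makeresults stands in for the index search): %w gives the day of the week with 0 = Sunday, so step back three days on a Monday, two on a Sunday, and one otherwise.

| makeresults
| eval Date = strftime(now(), "%d/%m/%Y")
| eval dow = tonumber(strftime(now(), "%w"))
| eval offset = case(dow=1, 3, dow=0, 2, true(), 1)
| eval Previous_Working_Day = strftime(now() - offset*86400, "%d/%m/%Y")
| fields Date Previous_Working_Day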
I have a simple search, satisfaction_date=0 OR close_date=0 AND status=8, over the previous month. I now have a requirement where users want to see (over the last 30 days) which of those records are now tagged with a different status. The unique identifier on each record is proposal_id. I.e. in October, proposal vdutta1 had a satisfaction date of 0 and status 8. Proposal vdutta1 now has a satisfaction date of 0 and status 6, so this record should be shown. Can you help?
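A sketch, assuming each proposal logs one event per status change and that <your base search> stands in for the original search: compare each proposal's earliest and latest status over the last 30 days and keep the ones that moved away from status 8.

<your base search> earliest=-30d@d (satisfaction_date=0 OR close_date=0)
| stats earliest(status) as old_status latest(status) as new_status by proposal_id
| where old_status=8 AND new_status!=8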
Hello, I am facing an issue while using SingleView with JS. In Splunk I use JS to create a few single views; the ones with only one trellis work, but the panels with multiple trellises are not displayed at all. They are either queued or just waiting for data. Could anyone please help?
Hi, I was looking for the datasets mentioned in DSDL, but I don't find them in the container. Is there anywhere to download the sample data for the notebooks so we can run the DSDL examples? The docs mention the data is available in the data path in the Jupyter notebook, but I don't find anything related to the sample data. If anyone has the data, please let us know. Thanks
Hi, I have a question regarding the 0 MB license. As stated in another thread here, it is mandatory in segmented networks where the DS cannot reach the LM. This is the case for my customer in a critical-infrastructure environment. This customer has several zones, running multiple DS / HF (filtering) inside different zones. The customer does not want to connect all of the DS/HF to the LM, but data is forwarded to a central indexer cluster. Lately we saw that Splunk does not allow installing the same 0 MB license on multiple servers. Does this mean I have to request a separate 0 MB license for each of the DS/HFs? The error reads: Peer xxx has the same license installed as peer yyy. Peer xxx is using license master self, and peer yyy is using license master self. Please fix this issue in 72 hours, otherwise peer will be disabled. What is your opinion on this, and how can we fix the issue? BR Markus
Hello Splunkers, I would like a page to show when a user clicks an app icon: instead of going to a default dashboard, it would show a list of the available dashboards (with links to open them) and information about the app and/or dashboards. I have searched around and have not found anything. I would like something similar to an "index.html" page. Thanks for a great source of info on Splunk. eholz1
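One hedged option is a simple landing dashboard that lists the app's views via the REST endpoint below and is then set as the app's default view; the app name "myapp" is a placeholder.

| rest /servicesNS/-/myapp/data/ui/views splunk_server=local
| search eai:acl.app="myapp" isDashboard=1 isVisible=1
| table label title
| rename label as "Dashboard", title as "View name"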
I keep getting a "Client is not authenticated" error. I wondered whether an error was occurring internally, and while searching I confirmed the following authentication error: startup:116 - Unable to read in product version information; isSessionKeyDefined=False error=[HTTP 401] Client is not authenticated Host: It seems to occur on both the IDX and SH, and the error is confirmed to have occurred on 11/12. Is there any way to check which app is causing it?
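As a hedged starting point, you can group the internal errors to see where they come from; when the component is ExecProcessor, the raw events usually name the script path, which reveals the app.

index=_internal sourcetype=splunkd "Client is not authenticated"
| stats count by host component
| sort - count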
hi, as you can see I use a relative time in my search in order to filter events on today between 7h and 19h: earliest=@d+7h latest=@d+19h Now I would like to link this relative time to my time picker in order to change the period, for example to display events over the last 7 days between 7h and 19h, or over the last 24h between 7h and 19h. Is it possible to do that? Thanks

<form>
  <label>CAP</label>
  <fieldset submitButton="false">
    <input type="time" token="field1">
      <label></label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <search>
          <query>`index_mes` sourcetype=web_request earliest=@d+7h latest=@d+19h
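One sketch: let the picker drive the overall period via its tokens and filter the hour of day in SPL instead of hard-coding earliest/latest. In the panel's <search> element, set <earliest>$field1.earliest$</earliest> and <latest>$field1.latest$</latest>; note the comparison operators may need a CDATA wrapper inside <query>.

`index_mes` sourcetype=web_request
| eval hour = tonumber(strftime(_time, "%H"))
| where hour>=7 AND hour<19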
Hello, we tried to enable TLS validation with Splunk 9.0.2 as described in the Splunk documentation. Unfortunately, this caused our distributed Splunk deployment, consisting of an indexer cluster and a search head cluster, to stop working. Specifically, searches could no longer be performed. We discovered the following messages in the search head log:

11-06-2022 23:59:59.839 +0100 ERROR DistributedPeerManagerHeartbeat [95344 DistributedPeerMonitorThread] - Send failure while pushing public key to search peer = https://10.10.10.10:8089 , error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.

This surprises us, because the TLS connection should be established via FQDN and not via an IP address. When a connection is made via IP, it is expected that the TLS name check against the host name fails, since only the FQDN, and no IP address, is stored in the TLS certificate. So we wondered why the search head addresses our index peer 10.10.10.10 via IP and not via FQDN. The Splunk documentation states that the search head gets the list of index peers from the cluster manager. In the cluster manager, however, our index peers report by name, at least that is what the output on the cluster manager suggests:

>> splunk/bin/splunk show cluster-status
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Replication factor met
Search factor met
All data is searchable
Indexing Ready YES
HA Mode: Disabled
indexpeer01.bla.fasel 132BC25B-7774-40D9-AAED-22F9795C8E3F site2 Searchable YES Status Up Bucket Count=2412
indexpeer02.bla.fasel 9B5E23F6-3D53-4AAF-805F-DEDF3ACF9D87 site2 Searchable YES Status Up Bucket Count=2467
indexpeer03.bla.fasel CCBC4D24-025E-45FB-A68D-9C1A14219D3F site1 Searchable YES Status Up Bucket Count=2384
indexpeer04.bla.fasel D649A1C7-9E86-4E48-9B47-144C70029C15 site1 Searchable YES Status Up Bucket Count=2475

On our search heads, the search peers are listed by IP address in the PeerURI attribute instead of by FQDN. How can we change this to the FQDN? Do other users have this problem as well? Or does TLS validation work even though IP addresses are listed there? Thanks!
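If the peers registered with the manager by IP, one hedged fix is register_search_address in server.conf on each index peer, so the manager advertises the FQDN to the search heads (restart the peer afterwards; the host name below is a placeholder).

# server.conf on each index peer
[clustering]
register_search_address = indexpeer01.bla.fasel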
Hi Team, we are planning to monitor Microsoft 365 and Microsoft Teams using AppDynamics. Could you please let us know how to set up the monitoring? Thanks, Kamal Rath
Hi all, I would like to include the start and end dates of my search in the email subject, for example 'The results from 2022-11-01 to 2022-11-11'. I tried the email tokens $job.earliestTime$ and $job.latestTime$, but they also include the time, which just clutters the subject. Is there any way to retrieve just the dates? Any help much appreciated. Cheers, Tom
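A sketch of one way: compute date-only strings inside the search with addinfo, then use $result.*$ tokens (which read from the first result row) in the subject.

your_base_search
| addinfo
| eval report_start = strftime(tonumber(info_min_time), "%Y-%m-%d")
| eval report_end = strftime(tonumber(info_max_time), "%Y-%m-%d")

Subject: The results from $result.report_start$ to $result.report_end$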
I've followed the documentation and also some examples on here, but for some reason I can't seem to get these fields to extract. Here is an example of the log:

xxx localhost 9997 8003 test test endRequest 2266 2022-11-17T08:08:06.617 2022-11-17T08:08:06.640 23 0 - OK - - DESC EXTENDED VIEW test_data_imp DESC - Denodo-Scheduler JDBC 127.0.0.1 - -

The props are as follows:

[denodo-vdp-queries]
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
REPORT-denodo-vdp-queries-fields = REPORT-denodo-vdp-queries-fields

The transforms are as follows:

[REPORT-denodo-vdp-queries-fields]
DELIMS = "\t"
FIELDS = "server_name","host","port","id","database","username","notification_type","sessionID","start_time","end_time","duration","waiting_time","num_rows","state","completed","cache","query","request_type","elements","user_agent","access_interface","client_ip","transaction_id","web_service_name"

I've pushed the app to the forwarders that send in the data, and it's in the right sourcetype. I've also pushed the app across the SH cluster, but none of the fields are extracted. Am I missing a step?
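For comparison, a minimal sketch of the pair, with two caveats: REPORT extractions run at search time on the search head, so the copy pushed to the forwarders has no effect, and a delimited field literally named "host" can collide with the default host field (it is renamed to vdp_host below purely as an illustration).

# props.conf (search head)
[denodo-vdp-queries]
REPORT-denodo-fields = denodo-vdp-queries-fields

# transforms.conf (search head)
[denodo-vdp-queries-fields]
DELIMS = "\t"
FIELDS = "server_name","vdp_host","port","id","database","username","notification_type","sessionID","start_time","end_time","duration","waiting_time","num_rows","state","completed","cache","query","request_type","elements","user_agent","access_interface","client_ip","transaction_id","web_service_name"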
Hi Folks, has anyone had any luck with the new built-in "Token Refresh Check" alert that comes with the CrowdStrike Falcon Event Streams TA (version 2.0.9+)? It is now part of the TA to restart inputs if they become blocked/unstable (fewer than 2 events in an hour). We can prove the alert is triggering, as we are getting emailed alerts, but it doesn't seem to be restarting the inputs when no events are seen in the timeframe, so we're still having to manually disable/enable the inputs. As far as we can tell, everything is configured correctly. Has anyone had any luck with the alert? Cheers
Hi, below is an extract of the log data that I'd like to present in my dashboard. The purpose is to display the total additions and total errors for each file - Activity.txt and Activity_XYZ.txt.

11/4/2022 7:30:00 AM Processing Task t1. Searching for D:\Box\FIL\Import\Activity\*.txt
11/4/2022 7:30:00 AM Processing D:\Box\FIL\Import\Activity\Activity.txt
11/4/2022 7:30:00 AM Deleted D:\Box\FIL\Import\Activity\Activity.txt
11/4/2022 7:30:00 AM Total Attempted Add's: 7
11/4/2022 7:30:00 AM Total Additions: 7
11/4/2022 7:30:00 AM Total Errors during Add: 0
11/4/2022 7:30:00 AM Last Transaction: 0
11/4/2022 7:30:00 AM Processing D:\Box\FIL\Import\Activity\Activity_XYZ.txt
11/4/2022 7:30:00 AM Deleted D:\Box\FIL\Import\Activity\Activity_XYZ.txt
11/4/2022 7:30:00 AM Total Attempted Add's: 17
11/4/2022 7:30:00 AM Total Additions: 17
11/4/2022 7:30:00 AM Total Errors during Add: 0
11/4/2022 7:30:00 AM Last Transaction: 0

I've created a search, but it only displays the first instance of Total Additions and Total Errors. Could you help me display the second instance of the data? To display Total Additions I use the search below:

host="VMXXX12" source="D:\\Box\\FIL\\Logs\\Pbsa\\Import Activity.log" "Total Additions: " | rex "Total Additions: \s*(?<Total_Additions>.+)\s*" | fields Total_Additions | head 1 | eval range=if(Total_Additions=="0", "severe", "low")

To display Total Errors I use the search below:

host="VMXXX12" source="D:\\Box\\FIL\\Logs\\Pbsa\\Import Activity.log" "Total Errors during Add: " | rex "Total Errors during Add: \s*(?<Total_Error>.+)\s*" | fields Total_Error | head 1 | eval range=if(Total_Error=="0", "low", "severe")

Thanks!
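A sketch of one way to tie each total to the file name that precedes it: extract both the totals and the file name, carry the file name forward onto the following events with filldown, then aggregate per file.

host="VMXXX12" source="D:\\Box\\FIL\\Logs\\Pbsa\\Import Activity.log" ("Processing D:" OR "Total Additions:" OR "Total Errors during Add:")
| rex "Processing (?<file>\S+\.txt)"
| rex "Total Additions:\s*(?<Total_Additions>\d+)"
| rex "Total Errors during Add:\s*(?<Total_Errors>\d+)"
| sort 0 _time
| filldown file
| stats last(Total_Additions) as Total_Additions, last(Total_Errors) as Total_Errors by file
| eval range = if(Total_Errors=="0", "low", "severe")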
We have the add-on installed. Is there any way to exclude specific types of events from indexing?
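In general this is done with a nullQueue transform on the indexer or heavy forwarder; a hedged sketch, where the sourcetype and regex are placeholders for whatever the add-on produces:

# props.conf
[your:sourcetype]
TRANSFORMS-drop_unwanted = drop_unwanted_events

# transforms.conf
[drop_unwanted_events]
REGEX = pattern_matching_events_to_discard
DEST_KEY = queue
FORMAT = nullQueue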
Hi Team, I am new here and would like to find a way to tackle this problem. I have structured JSON events that I am able to push to the HTTP Event Collector and create dashboards from. However, if I save the same JSON event data to a log file and use the forwarder, Splunk is unable to extract the fields. My sample JSON event is below.

{"time":1668673601179, "host":"SAG-13X8573", "event":
  {"correlationid":"11223361", "name":"API Start", "apiName":"StatementsAPI", "apiOperation":"getStatements", "method":"GET",
   "requestHeaders": {"Accept":"application/json", "Content-Type":"application/json"},
   "pathParams": {"customerID":"11223344"},
   "esbReqHeaders": {"Accept":"application/json"}
  }
}

If I post this to the HTTP Event Collector, I can see the fields correctly, as shown in the first screenshot. If I save the same JSON data to a log file and the forwarder sends it to Splunk, it cannot parse the data properly, as shown in the second screenshot. The event fields are not extracted properly, including the timestamp. Should I format the JSON data in some other way before writing it to the log file? Or do any other configurations need to be done to make it work? Please let me know. Thank you
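One thing to check: the time/host/event envelope is interpreted only by the HEC endpoint; a monitored file is just raw JSON, so the fields land under event.* and the timestamp is not picked up. Two hedged options are to log only the inner event object, or to parse the file with a props.conf sketch like the one below (the sourcetype name is a placeholder; INDEXED_EXTRACTIONS must be deployed to the universal forwarder).

# props.conf on the universal forwarder
[my_json_file]
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = time
TIME_FORMAT = %s%3N

# props.conf on the search head, to avoid double extraction
[my_json_file]
KV_MODE = none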
I am trying to create a custom field, like host and source, by making the changes shown in the attached photos of entrypoint.sh and inputs.conf. When I deploy with these changes, it does not create a field named task for me. Please check whether I am on the right path; if I did anything wrong, let me know.
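The attachments are not visible here, but for reference, the documented pattern for a custom indexed field on an input is _meta in inputs.conf plus fields.conf on the search side; every name below is a placeholder sketch.

# inputs.conf
[monitor:///var/log/myapp]
sourcetype = myapp
_meta = task::my_task_value

# fields.conf on the search head, so the indexed field is searchable as task=...
[task]
INDEXED = true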
Hi Splunkers, I have two lookups that share a common field, "values". For example:

lookup1    lookup2
values     values
a          a
b          e
c          f
d          g

I need to compare these two lookups and get the values that are not in lookup2 (b, c, d). I'm using the query below, but it's not working. Please help. TIA

|inputlookup lookup1.csv |stats count by "values" |search NOT [|inputlookup lookup2.csv |fields "values" | fields - count ]
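For comparison, a sketch without the stats step (which adds a count field that can interfere with the match); the NOT [...] subsearch expands to values="a" OR values="e" ... and excludes the matching rows.

| inputlookup lookup1.csv
| fields values
| search NOT [ | inputlookup lookup2.csv | fields values ]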
Hi team, I have created a user and set up capabilities; however, I haven't checked any delete capability. When I check in the user's console, I am able to see the delete option. Please refer to the screenshot below. I even tried unchecking the can_delete option for the alert with admin access, but it still doesn't work. Please suggest.
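The Delete option in the UI is generally tied to the delete_by_keyword capability, which can arrive through an inherited role rather than the user's own role; a hedged way to see which roles carry it:

| rest /services/authorization/roles splunk_server=local
| search capabilities=delete_by_keyword OR imported_capabilities=delete_by_keyword
| table title capabilities imported_capabilities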
Hello! I currently have this eval in a search of mine:   | eval exists=if(like(_raw, "%xa recovery%"), 0, 1)   Is there any way to set the variable exists to 0 until a specific event comes up? What I'm trying to accomplish is like this... If event contains(xa recovery) exists=0 until event contains(System READY) then exists=1. Thank you!
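A sketch using a marker that is carried forward between the two boundary events: filldown fills nulls with the last non-null value, and events before the first marker default to 1 here.

| eval marker = case(like(_raw, "%xa recovery%"), 0, like(_raw, "%System READY%"), 1, true(), null())
| sort 0 _time
| filldown marker
| eval exists = coalesce(marker, 1)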