All Topics

Hi all, I'm researching the best way to have Splunk send an alert event to open a ticket in Salesforce. I've looked around the internet, but found nothing specific to Salesforce ticketing from Splunk. We already have the Salesforce add-on installed to collect data, and as far as I know it doesn't send alerts to open Salesforce tickets. Any assistance would be appreciated.
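One possible approach (a hedged sketch, not from the original post): Splunk's built-in webhook alert action can POST the alert payload to a middleware endpoint that authenticates to Salesforce and creates a Case via the Salesforce REST API. A minimal savedsearches.conf fragment, where the search, schedule, and relay URL are all hypothetical placeholders:

```ini
[Open Salesforce ticket on alert]
search = index=main sourcetype=my_sourcetype log_level=ERROR
cron_schedule = */15 * * * *
enableSched = 1
alert.track = 1
action.webhook = 1
# hypothetical relay service that calls the Salesforce REST endpoint
# /services/data/vXX.X/sobjects/Case with the mapped ticket fields
action.webhook.param.url = https://relay.example.com/salesforce/case
```

The relay layer is needed because the stock webhook action cannot do Salesforce OAuth on its own; a custom alert action app is the alternative.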
Hi, I have a search which I want to optimise by replacing the join command:

index="AAA" sourcetype=BBB | stats count(OK) as OK by Date ID | bin Date span=1d | stats sum(OK) by Date ID | sort -Date

It returns results like this:

Date        ID   OK
2020-09-30  XXX  123
2020-09-30  YYY  26
2020-09-29  ZZZ  763
2020-09-29  XXX  453

I want to retrieve only the last Date of each day, but the only way to do that is by catching the last ID, which is based on another timestamp. So I have a second search which retrieves the last ID:

index="AAA" sourcetype=BBB | stats max(Timestamp) as Timestamp by ID | sort Timestamp desc | head 1

The result is:

ID
XXX

I use a join command, but I would like to know if there is another way to build the search without the join. Do you have a better solution? Thanks
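A join-free sketch (an assumption about the intent, not a tested answer): since both searches read the same index and sourcetype, both aggregates can be computed in one pass, then eventstats can tag the ID holding the overall latest Timestamp so only its rows are kept:

```spl
index="AAA" sourcetype=BBB
| bin Date span=1d
| stats sum(OK) as OK max(Timestamp) as last_ts by Date ID
| eventstats max(last_ts) as overall_last
| eventstats values(eval(if(last_ts=overall_last, ID, null()))) as last_id
| where ID=last_id
| sort -Date
```

Whether to filter on the ID (as here) or on the single row carrying the maximum timestamp depends on the exact requirement; the eventstats pattern works for either.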
Hi, I have a clustered environment (a Search Head Cluster with 1 forwarder, 3 SHs, and 2 indexers). I have deployed a custom-built app on the forwarder and the 3 SHs. I have created a KV Store lookup with the custom-built app on the forwarder and set replicate = true in collections.conf, as described in https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/kvstore/usingconfigurationfiles. I want to replicate the KV Store lookup from the forwarder to the indexers/search heads, but it isn't replicating there. Let me know how to achieve this. Thanks in advance.
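For reference, the usual conf pairing (a sketch with hypothetical stanza names, not the poster's actual files): a collection declared in collections.conf plus a lookup definition in transforms.conf. Note that `replicate = true` controls knowledge-bundle replication from a search head out to the indexers; the collection itself has to live on a KV Store-enabled instance, which is normally a search head, not a forwarder:

```ini
# collections.conf
[my_collection]
replicate = true

# transforms.conf
[my_kvstore_lookup]
external_type = kvstore
collection    = my_collection
fields_list   = _key, host, status
```

If the data originates on the forwarder, a common pattern is to push it to a search head (e.g. via a scheduled search with outputlookup, or the KV Store REST API) and let replication fan out from there.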
Hi, I have data that contains a field in binary, and I can use a lookup table to map the various binary values to a value that makes sense. That works fine:

chart count by TPTYPE | lookup ims_tptype_lookup TPTYPE OUTPUT trantype AS TPTYPE

However, rather than a single column for each value over the entire interval, I'd like to show the counts per hour, which is where I seem to be stumbling. I either get the original binary value or I get errors in the search.

bin _time span=1h | timechart count by TPTYPE | lookup ims_tptype_lookup TPTYPE OUTPUT trantype AS TPTYPE

does what I want, but the lookup isn't applied to the chart data, and if I move the lookup before the timechart command I get errors. Could anyone point me in the right direction please? Steve
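A common pattern for this (hedged sketch, assuming the lookup's output field is `trantype` as in the post): apply the lookup per event into a new field without renaming it back onto TPTYPE, then split the timechart by that new field, so the lookup runs before aggregation and no column renaming is needed afterwards:

```spl
index=my_index sourcetype=my_sourcetype
| lookup ims_tptype_lookup TPTYPE OUTPUT trantype
| timechart span=1h count by trantype
```

The earlier error likely came from `OUTPUT trantype AS TPTYPE` overwriting the lookup's own input field mid-pipeline; writing to a separate field avoids that.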
Hello everyone, I am new to Splunk and this community. I have searched everywhere for my problem but could not figure out what is wrong. Basically I am using a base search and post-process searches for a dashboard. My base search is something like this:

<search id="basesearch1">
  <query>index=index1 | fields field1, field2</query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
</search>

My second base search, which uses the first base search:

<search base="basesearch1" id="basesearch2">
  <query>search field1=value1</query>
</search>

And finally the post-process search is:

<search base="basesearch2">
  <query>stats count(field1) as count by field2 | sort -count | head 5</query>
</search>

When I run it as a single search query like this, there is no problem:

index=index1 | fields field1, field2 | search field1=value1 | stats count(field1) as count by field2 | sort -count | head 5

However, in the dashboard the count numbers do not match the search query above. I used 2 base searches because, in the same dashboard, I need to use basesearch1 and basesearch2 in different panels as well.
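One frequent cause of mismatched counts in this setup (a hedged guess, not a confirmed diagnosis): a base search that returns raw events is silently truncated at the post-process event limit, so the downstream stats only sees part of the data. The usual remedy is to make the base search transforming, for example:

```xml
<search id="basesearch1">
  <query>index=index1 | stats count by field1, field2</query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
</search>
```

The post-process searches then re-aggregate the pre-counted rows (e.g. `search field1=value1 | stats sum(count) as count by field2 | sort -count | head 5`), which stays under the limit because only aggregated rows flow between searches.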
I am running Splunk on Windows Server 2016. I attempted to send Palo Alto logs to Splunk but received the following error: "unconfigured/disabled/deleted index=pan_logs with source=udp:515 host=x.x.x.x". I edited the .conf file a number of times and restarted Splunk. I am following the instructions for the Palo Alto app, add-on, and configurations posted under Splunk Documentation. I believe that I need to re-configure or add an additional indexer, but I am not sure exactly where. Thank you
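That error message usually means the events arrived addressed to an index named pan_logs that does not exist on the indexer (a hedged reading of the message, not a confirmed diagnosis). A minimal indexes.conf stanza to create it, with illustrative paths:

```ini
# indexes.conf
[pan_logs]
homePath   = $SPLUNK_DB/pan_logs/db
coldPath   = $SPLUNK_DB/pan_logs/colddb
thawedPath = $SPLUNK_DB/pan_logs/thaweddb
```

The index can also be created in the UI under Settings > Indexes; a restart (or the UI equivalent) is needed before the input will route into it.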
I have a problem with the logs: they are arriving with a delay of 12 hours or more. The information first reaches a syslog server and is forwarded to the indexers. When reviewing the logs on the syslog servers, I find that they arrive without problems and with the correct date and time; when I go to the indexers or search heads to look at the logs, I see that they have a delay of 12 hours or more.

With this document I have tried to diagnose the problem, but I cannot find the same panels the document asks me to review. In the part where it suggests checking with the command iostat -zx 1, one of the parameters is in the values catalogued as bad: https://www.splunk.com/pdfs/technical-briefs/disk-diagnosis-digging-deep-with-monitoring-console-and-more.pdf What else should I check?
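A consistent 12-hour offset often points to a timezone or timestamp-parsing mismatch rather than ingestion latency (a hedged hypothesis, not from the post). Measuring the gap between event time and index time distinguishes the two; a sketch assuming a hypothetical index name:

```spl
index=my_syslog_index
| eval lag_sec = _indextime - _time
| stats min(lag_sec) avg(lag_sec) max(lag_sec) by host, sourcetype
```

If lag_sec clusters near 43200 (12 hours) rather than varying, the timestamps are being parsed in the wrong timezone (a props.conf TZ setting for that sourcetype) instead of genuinely arriving late.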
The query below is what is used to detect scanning on a network:

| tstats summariesonly=t allow_old_summaries=t dc(All_Traffic.dest_port) as num_dest_port dc(All_Traffic.dest_ip) as num_dest_ip from datamodel=Network_Traffic by All_Traffic.src_ip | rename "All_Traffic.*" as "*" | where num_dest_port > 100 OR num_dest_ip > 100 | sort - num_dest_ip

Unfortunately it picks up syslog traffic and causes false positives. I need help on how or where to add a filter (e.g. for udp/514) to the syntax so it omits the syslog traffic it detects, or another option for omitting it. Thanks
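One hedged option (assuming syslog is udp/514 in this environment): filter the syslog traffic out inside the tstats where clause, so it never contributes to the distinct counts at all:

```spl
| tstats summariesonly=t allow_old_summaries=t
    dc(All_Traffic.dest_port) as num_dest_port
    dc(All_Traffic.dest_ip)   as num_dest_ip
    from datamodel=Network_Traffic
    where NOT (All_Traffic.dest_port=514 All_Traffic.transport=udp)
    by All_Traffic.src_ip
| rename "All_Traffic.*" as "*"
| where num_dest_port > 100 OR num_dest_ip > 100
| sort - num_dest_ip
```

Filtering in the where clause is also faster than a post-aggregation dedup, because tstats excludes the rows before counting.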
e.g. QUERY 1:

host=jtcstcxbsswb* source="/usr/IBM/HTTPServer/logs/access*" httpmethod="GET" statuscode="200" loaninfo="/api*" OR Requestinfo="*/" OR sitename="*/LoginAccountUserName" | eval APFields=split(loaninfo,"/") | eval APNumOfFields=mvcount(APFields) | eval AP2ndFromLast=mvindex(APFields,APNumOfFields-2) | eval APLoanNumber=mvindex(APFields,6) | eval APLast=mvindex(APFields,-1) | search APLast="loans" OR APLast="summary" OR APLast="payments" | timechart count(APLast), Avg(cookie) as URT by APLast

QUERY 2:

sourcetype=apigee:digit* host=JTCLSGLAPGERT* APIProduct=*-Authenticated-Product | timechart span=5m distinct_count(LoginAccountUserName)

I want something like this:

host=jtcstcxbsswb* source="/usr/IBM/HTTPServer/logs/access*" httpmethod="GET" statuscode="200" loaninfo="/api*" | eval APFields=split(loaninfo,"/") | eval APNumOfFields=mvcount(APFields) | eval AP2ndFromLast=mvindex(APFields,APNumOfFields-2) | eval APLoanNumber=mvindex(APFields,6) | eval APLast=mvindex(APFields,-1) | search APLast="loans" OR APLast="summary" OR APLast="payments" | stats count(APLast), Avg(cookie) as URT by APLast | append [search sourcetype=apigee:digit* host=JTCLSGLAPGERT* APIProduct=*-Authenticated-Product | timechart span=5m distinct_count(LoginAccountUserName)] | bin _time | stats count(APLast), Avg(cookie) as URT, distinct_count(LoginAccountUserName) by APLast

I am able to get the data as: Time | count(APLAST) | URT | LoginAccountUserName, but I see only zero values in LoginAccountUserName. How do I fetch the LoginAccountUserName data from the 2nd query and list it here?
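A hedged sketch of one way to line the two result sets up: aggregate each search on the same _time bin before appending, then merge rows per bin, so the distinct count from the second search is not grouped under APLast (where it has no values and therefore shows zero):

```spl
host=jtcstcxbsswb* source="/usr/IBM/HTTPServer/logs/access*" httpmethod="GET" statuscode="200" loaninfo="/api*"
| eval APLast=mvindex(split(loaninfo,"/"),-1)
| search APLast="loans" OR APLast="summary" OR APLast="payments"
| bin _time span=5m
| stats count as APLast_count avg(cookie) as URT by _time APLast
| append
    [ search sourcetype=apigee:digit* host=JTCLSGLAPGERT* APIProduct=*-Authenticated-Product
      | bin _time span=5m
      | stats dc(LoginAccountUserName) as users by _time ]
| stats values(APLast_count) as APLast_count values(URT) as URT values(users) as users by _time
```

The caveat: merging by _time alone flattens the per-APLast split; keeping both the per-APLast columns and the per-bin user count in one table requires deciding which dimension the final rows should carry.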
Hi, my query is taking too long:

| from datamodel:Intrusion_Detection.IDS_Attacks | where _time>relative_time(now(),"-10s@s") | stats values(tag) as tag, dc(signature) as count by src | where count>25

It seems it will never return output. Any idea?
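`| from datamodel:` runs the full datamodel search and only filters on _time afterwards, so it scans far more than the last 10 seconds. A hedged alternative (assuming the datamodel is accelerated) is tstats with the time range pushed into the where clause:

```spl
| tstats summariesonly=t
    values(IDS_Attacks.tag) as tag
    dc(IDS_Attacks.signature) as count
    from datamodel=Intrusion_Detection.IDS_Attacks
    where earliest=-10s@s latest=now
    by IDS_Attacks.src
| rename "IDS_Attacks.*" as "*"
| where count>25
```

Note that acceleration summaries lag real time by the acceleration schedule, so a 10-second window may also need `summariesonly=f` depending on how fresh the data must be.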
We have had Splunk Cloud Gateway for a while and tried to upgrade. On the SHC, following the app deployment, everything shows green and we have access to dashboards in the app, but every 10 minutes we get an alert:

Unable to initialize modular input "drone_mode_subscription_modular_input" defined inside the app "splunk_app_cloudgateway": Introspecting scheme=drone_mode_subscription_modular_input: script running failed (exited with code 1)

Not really sure what to make of it. I've tried restarting, disabling, etc.
Hi, I would like to ask what the fastest way is to list all tags on a certain index. We have millions of events in an index, "index_test1", and I want to get all the tags on this index.

| stats values(tag) as tag - 280.483 seconds
| fieldsummary maxvals=10 tag - 200.979 seconds

Is there a faster way to achieve this than the ones listed above?
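Since tags are knowledge objects defined in tags.conf rather than indexed data, listing the configured tags via REST avoids scanning events entirely (a hedged sketch; note this returns all tag stanzas visible to the user, not only those that actually fire on events in index_test1):

```spl
| rest /servicesNS/-/-/configs/conf-tags
| table title
```

If the requirement is specifically "tags that occur on events in this index", an event scan is hard to avoid, but narrowing the time range or using a summary index of pre-computed tag values would shrink the search.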
In Splunk ITSI's episode review, how can I remove the private filters defined by people who have left?  I want to do this because the names of those filters can not be re-used without removing them.
How to upgrade Splunk enterprise version using tar method? Can someone guide me through the steps or documentation?
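The commonly documented tar procedure (a sketch, assuming a *nix host, /opt/splunk as $SPLUNK_HOME, and a hypothetical package file name; always check the release notes for version-specific upgrade paths first):

```shell
# stop Splunk, back up the config, then untar the new release over the old install
/opt/splunk/bin/splunk stop
cp -rp /opt/splunk/etc /opt/splunk_etc_backup
tar -xzvf splunk-9.x.x-xxxxxxxx-Linux-x86_64.tgz -C /opt
/opt/splunk/bin/splunk start --accept-license --answer-yes
```

On first start after the untar, Splunk detects the version change and runs its migration step; in clustered environments the upgrade order (cluster manager, then peers/members) matters and is covered in the official upgrade documentation.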
Hi @gljiva (and others), I'm situated in Scandinavia, where no one uses the US way of showing numbers, i.e. "1,234,567.89"; here we use "1.234.567,89" and "1 234 567,89". In other words, the locales here are da_DK, sv_SE, fi_FI, etc. We have a lot in common with UK "en_GB", but numbers still don't come out right, which is no good for our consumers looking at numbers less than 1000, as they would not instinctively know how to interpret them: what is "1.234" versus "1,234"? One can't know for sure without knowing which locale is used.

So I need to know how and where to change this in Splunk. Let me say right away that I've already read a lot on this subject, and seen many notes on Splunk (and other places) saying this is doable, but so far no real examples of how and where in the code to change it so it works. It is NOT enough to copy a couple of files to a new sub-folder called e.g. "da_DK" or "sv_SE"; this will not change the numbers.

So who knows how to do this, and can document exactly how it is done? And while we're at it, it would be really nice to get the core knowledge of how i18n is handled, especially in relation to:
numbers
dates
currency
And how to set individual formats for each of these for each of our 6 locales. Many thanks in advance
We are developing 4-5 dashboards with around 10 charts in each. When we work on multiple dashboards, we frequently hit the Maximum Disk Usage error. There are also 10+ alerts running every day, and their search results are configured to be stored for 24 hours. I am not sure what the root cause of this error is and where to optimize.

1. Because the data is not readily available, we calculate the country and continent based on geo-location. (The country code will be delivered with the events in the future.) This calculation happens for every event as part of the base search.
2. Number of events: the dashboards default to 'last 30 days'. Because of this, the number and size of events to be handled is large.
3. Alerts / past search jobs: the search results of the alerts are stored for 24 hours, and the search job results are configured to be stored for 10 minutes.

The searches related to the dashboards and the alerts are mostly reporting/monitoring in nature, so they mostly aggregate events and the final search results should not be big. Could someone please clarify what would be causing this disk issue? We could always request more disk space for the developers who create the dashboards, but we would like to avoid the same issue reaching users as well, so I would like to understand the root cause.
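For context (a hedged pointer, not a confirmed diagnosis): this message is typically the per-role search disk quota being exceeded by accumulated search-job artifacts, which is governed by srchDiskQuota in authorize.conf. An illustrative fragment with a hypothetical role name and value:

```ini
# authorize.conf - per-role quota (MB) for search job artifacts on disk
[role_dashboard_devs]
srchDiskQuota = 500
```

Raising the quota treats the symptom; shortening result retention (the 24-hour alert artifacts) and narrowing the 30-day default range address the accumulation itself.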
Hello. We have a large number of devices that send syslog to Splunk that we need to ingest. All devices and Splunk are on premises. There are many different types of syslog messages that we need to collect. As an example, some of the source types are:
web proxy logs
firewall logs from different vendors
web application logs
dhcp logs
and many many more...

All devices currently send syslog to the same IP address and UDP port 514. Currently we manage this with an rsyslog configuration that is shared across servers using a puppet config. This allows us to edit the syslog configuration on one server and have it pushed out to all the other servers. The rsyslog.conf file identifies the type of syslog being received and which file and directory the syslog message should be written to. The various files are written to disk, and then an inputs.conf file is automatically updated to ensure that each file is ingested into Splunk. The file and directory path allows us to determine the index the data is written to and the sourcetype. This works, but is quite complex. The servers are currently based on CentOS 6, which is end of life in November. How do other people collect and manage syslog in their environments? Thanks for your help.
Hi everyone, below are my logs:

RID:492e0bd2-d3c4-4d28-a318-c4aee5f4e0-of1-team_a-dmrupload ARC_EL:ARC_1100: EVENT RECEIVED FROM SOURCE

This is the RID 492e0bd2-d3c4-4d28-a318-c4aee5f4e0 that I have extracted from the logs. My search query:

index=ABC ns=XYZ app_name=api 22abe6c4-6eaf-4d47-8c4a-79b2594e

Each RID has gone through different events; for example, this RID "22abe6c4-6eaf-4d47-8c4a-79b2594e", as seen in the logs below, has gone through events like "ARC SUCCESSFULLY UPDATED RESPONSE BACK TO SOURCE OR SF" and "ARC SUCCESSFULLY RECEIVED RESPONSE FROM TARGET", etc.

2020-09-30T05:03:34.604056922Z app_name=ABC environment=e1 ns=HJ pod_container=api pod_name=deployment-20-lmkq6 message=2020-09-29 22:03:34.602 INFO [blaze-arc-service,,,] 1 --- [ elastic-3] c.a.b.a.c.s.impl.SFCallbackService : RID:22abe6c4-6eaf-4d47-8c4a-79b2594ea612-of1-team_g ARC_EL:ARC_1600: ARC SUCCESSFULLY UPDATED RESPONSE BACK TO SOURCE OR SF
2020-09-30T05:03:34.604056922Z app_name=ABC environment=e1 ns=HJ pod_container=api pod_name=deployment-20-lmkq6 message=2020-09-29 22:03:34.602 INFO [blaze-arc-service,,,] 1 --- [ elastic-3] c.a.b.a.c.s.impl.SFCallbackService : RID:22abe6c4-6eaf-4d47-8c4a-79b2594ea612-of1-team_g ARC_EL:ARC_1600: ARC SUCCESSFULLY RECEIVED RESPONSE FROM TARGET

What I want now: when I click on one particular RID, e.g. as a hyperlink, it should open the events, and for each event the RID has gone through it should show a tick, otherwise a cross. Below are my events:

ARC EVENT RECEIVED FROM SOURCE
ARC FAILED TO INVOKE TARGET END POINT
ARC SUCCESSFULLY RECEIVED RESPONSE FROM TARGET
ARC FAILED TO RECEIVE RESPONSE FROM TARGET
ARC SUCCESSFULLY UPDATED RESPONSE BACK TO SOURCE OR SF
ARC FAILED TO UPDATE RESPONSE BACK TO SOURCE OR SF
ARC FAILED TO DOWNLOAD FILE FROM SOURCE OR SF
ARC S3 UPLOAD FAILED

Is that possible in Splunk? Can someone guide me on that?
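A hedged sketch of the underlying drilldown search (the rex pattern is an assumption about the log format, and only a subset of the event names is shown): extract the event label for the clicked RID, then mark each expected event as present or missing:

```spl
index=ABC ns=XYZ app_name=api "22abe6c4-6eaf-4d47-8c4a-79b2594e"
| rex "ARC_EL:ARC_\d+:\s+(?<arc_event>.+)$"
| stats count by arc_event
| append
    [| makeresults
     | eval arc_event=split("ARC EVENT RECEIVED FROM SOURCE;ARC SUCCESSFULLY RECEIVED RESPONSE FROM TARGET;ARC SUCCESSFULLY UPDATED RESPONSE BACK TO SOURCE OR SF;ARC S3 UPLOAD FAILED", ";")
     | mvexpand arc_event
     | eval count=0]
| stats max(count) as count by arc_event
| eval status=if(count>0, "✓", "✗")
| table arc_event status
```

The hyperlink part would be a dashboard table drilldown passing the clicked RID into this search as a token (e.g. $row.RID$).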
So, as the title suggests, this is what I am trying to do. I've installed the VMware parts on the following machines: a search head, which also acts as the scheduler, and a heavy forwarder, which I want to use as the data collection node. When I go on the search head to set the DCN as the forwarder, after entering the splunk admin user and the password (which is not the default password), I get the error: "No password found for this node please save a password".

I'm not sure what this means, and the documentation seems quite lacking. I'm not familiar enough with VMware or vCenter to accurately guess what it means. What I want exactly is to access the API I was provided. This API is basically just a hostname of a vCenter server. I haven't seen anything in the documentation about providing the API link.
I want to visualize the results in Grafana but Grafana is fetching data from indices corresponding to "Search & Reporting" only.