All Topics

Is there any documentation specifically on upgrading a standalone search head? I can only find documentation for a cluster, so I want to make sure I document the correct steps for a change request before actually doing the upgrade.
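In case a rough outline helps with the change request, here is a minimal sketch for a standalone *nix search head, assuming a tarball install under /opt/splunk (the package filename and backup path are placeholders, not taken from any specific doc page):

    # stop Splunk and back up the configuration before upgrading
    $SPLUNK_HOME/bin/splunk stop
    tar -czf /tmp/splunk_etc_backup.tar.gz -C $SPLUNK_HOME etc

    # unpack the new Splunk Enterprise package over the existing installation
    tar -xzf splunk-<new-version>-Linux-x86_64.tgz -C /opt

    # on start, Splunk detects the newer version and runs the migration
    $SPLUNK_HOME/bin/splunk start --accept-license
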
I have a job that runs multiple times if it fails. I need to create a dashboard with a table that shows all the attempts with their status.

Logs:
{id:"1",retrynumber:"1",uniqueid:"23213131",status:"Failed"}
{id:"1",retrynumber:"2",uniqueid:"43434333",status:"Failed"}
{id:"1",retrynumber:"3",uniqueid:"23213132",status:"Failed"}
{id:"1",retrynumber:"4",uniqueid:"23213154",status:"Passed"}

I want a table like:
id    retry1    retry2    retry3    retry4
1     Failed    Failed    Failed    Passed

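A rough approach, assuming the JSON fields are already extracted (field names taken from the sample logs; the index and sourcetype below are placeholders): build a column name from retrynumber and pivot with chart.

    index=myjobs sourcetype=job_logs
    | eval retry="retry".retrynumber
    | chart values(status) over id by retry
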
Hi, I want to display the error details from the last 30 minutes, so they can be investigated, when the number of errors has increased by 10% compared to the previous 30 minutes.

Search 1 - the search for the data I want to show in the results:

index=myindex source=mysource sourcetype=mysourcetype FailureReason IN ("*Error1*", "*Error2*", "*Error3*")
| table ReqReceivedTimestamp, APIName, ReqUrl, ShopName, ResponseCode, FailureReason, FailureServiceCalloutResponse

Search 2 - the search I use to work out whether there is a more than 10% increase compared to the previous 30 minutes:

index=myindex source=mysource sourcetype=mysourcetype FailureReason IN ("*Error1*", "*Error2*", "*Error3*")
| timechart span=30m count as server
| streamstats window=1 current=f values(server) as last30
| eval difference=server-last30
| eval percentage_change=round((difference/last30)*100,2)
| eval AboveThreshold=if(round(((server-last30)/last30),4)>.10, "True", null())
| where AboveThreshold = "True"
| table percentage_change

I want to understand the best way to combine these two searches and show the table from Search 1 only when Search 2 is above 10%.

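One possible way to combine them in a single search, sketched under the assumption that a 60-minute window split into "current" and "previous" halves is acceptable (the threshold logic mirrors Search 2; nothing here is tested against the real data):

    index=myindex source=mysource sourcetype=mysourcetype FailureReason IN ("*Error1*", "*Error2*", "*Error3*") earliest=-60m@m latest=now
    | eval period=if(_time >= relative_time(now(), "-30m@m"), "current", "previous")
    | eventstats count(eval(period="current")) as current_count, count(eval(period="previous")) as previous_count
    | where period="current" AND current_count > previous_count * 1.1
    | table ReqReceivedTimestamp, APIName, ReqUrl, ShopName, ResponseCode, FailureReason, FailureServiceCalloutResponse

If no events pass the where clause, the table is simply empty, which works well for a dashboard panel or an alert condition.
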
Hi, good day everyone. I need your help, please. I need to join log events by date, but I need them from the beginning of the event to the end of it (the event begins when a card is inserted and ends when that card is extracted). I have tried to do it with a regex on the sourcetype, but without success.
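One pattern worth trying, assuming there is (or can be extracted) a field such as card_id that is common to the insert and extract events, and that the start/end wording below is adjusted to match the real log text:

    index=card_logs
    | transaction card_id startswith="card inserted" endswith="card extracted"
    | table _time, card_id, duration, eventcount

transaction then gives one combined event per card session, with duration showing the time between insertion and extraction.
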
Hello, this is my first experience with Splunk Cloud and I would like to ask for some help. I am trying to forward logs from Fortinet to my heavy forwarder. I have configured UDP port 514 and the fortigate_log sourcetype, as per the option presented in Data Inputs. After choosing the settings and the index, I started searching for the events, but without success. Can you help me configure this so that the events appear in both the heavy forwarder and Splunk Cloud? NOTE: My environment has two heavy forwarders and no deployment server; they communicate directly with Splunk Cloud.
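For comparison, a minimal configuration sketch on the heavy forwarder, assuming the Splunk Cloud forwarder credentials app already provides the tcpout group, and that the index name below is a placeholder that must exist both locally and in Splunk Cloud. indexAndForward keeps a local copy so the events are also searchable on the HF itself:

    inputs.conf:
    [udp://514]
    sourcetype = fortigate_log
    index = fortinet
    connection_host = ip

    outputs.conf:
    [tcpout]
    indexAndForward = true
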
I'm having a hard time understanding what Splunk Observability does that you can't do with Splunk Platform (Cloud or Enterprise). Aren't you able to take in logs, metrics, and traces and do real-time reporting, monitoring and visualizations with Splunk Platform? And isn't Splunk Platform used to detect and solve issues? Isn't that the same as what Observability does? Thanks a lot for your help in advance!
How do I check which destinations generate the most logs on a specific firewall (host=10.22.44.254)? I would like to know the correct search to find the main destinations, and also how to filter them out, so I can see how much license I would save if I stopped receiving them.
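A starting point, assuming the firewall events carry a dest_ip field and live in an index called firewall (both assumptions; adjust to the real data). The second search estimates ingest volume from raw event size, which is only an approximation of license usage:

    Top destinations:
    index=firewall host=10.22.44.254
    | top limit=20 dest_ip

    Estimated volume per destination:
    index=firewall host=10.22.44.254
    | eval bytes=len(_raw)
    | stats sum(bytes) as bytes by dest_ip
    | eval GB=round(bytes/1024/1024/1024, 3)
    | sort - GB

To see the effect of dropping a destination, rerun the first search with NOT dest_ip="x.x.x.x" added to the base search.
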
I have a saved search running every few minutes to append data to a 15-day CSV log file within Splunk.

I'm trying to use a timechart with timewrap to compare yesterday's values between 6am and 8pm with the same period a week earlier. If I run the search before 6am, I get exactly what I want - two bell-shaped series on a timechart showing a single day from 6am to 8pm. However, if I run the exact same search after 6am, I get four series on a timechart spanning two days - on the left of the chart a comparison of two series up until 8pm, then a blank period in the middle of the chart from 8pm to 6am the following day, and then on the right, a comparison of two series from 6am.

| inputlookup fifteen_day_logfile.csv
| where (_time>=relative_time(now(),"-8d@d+6h") AND _time<=relative_time(now(),"-8d@d+20h")) OR (_time>=relative_time(now(),"-1d@d+6h") AND _time<=relative_time(now(),"-1d@d+20h"))
| timechart span=5m cont=false sum(Value) as Value
| timewrap 1d

Basically, I'm stumped as to why timewrap is sometimes ignoring the relative_time statements, depending on what time of day it is run. Any help would be much appreciated.

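In case it is useful while this gets answered: timewrap appears to wrap in whole 1-day periods counted back from the end of the search time range (effectively the run time here, since inputlookup ignores the time picker), so once the run time is past 6am the two 6am-8pm windows no longer fall inside the same wrapped day. One workaround, sketched without being tested against the lookup, is to skip timewrap and overlay the two days by time-of-day instead:

    | inputlookup fifteen_day_logfile.csv
    | where (_time>=relative_time(now(),"-8d@d+6h") AND _time<=relative_time(now(),"-8d@d+20h")) OR (_time>=relative_time(now(),"-1d@d+6h") AND _time<=relative_time(now(),"-1d@d+20h"))
    | eval day=if(_time < relative_time(now(),"-2d"), "week_earlier", "yesterday")
    | bin _time span=5m
    | eval time_of_day=strftime(_time, "%H:%M")
    | chart sum(Value) as Value over time_of_day by day
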
Hi, I have a lookup as follows:

ip                id    name
111.111.111.111   111   simone
*                 222   marco

In the index I have:

ip                id
111.111.111.111   111
222.222.222.222   222
333.333.333.333   222

The result I'm looking for is the following:

ip                id    name
111.111.111.111   111   simone
222.222.222.222   222   marco
333.333.333.333   222   marco

Can you help me? Thank you, Simone

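One way this is often handled, assuming the lookup is defined in transforms.conf (the lookup name mylookup_by_ip_id and the CSV filename are placeholders; the same match type can also be set in the UI under the lookup definition's advanced options): enable wildcard matching on the ip field, then look up on both ip and id.

    transforms.conf:
    [mylookup_by_ip_id]
    filename = mylookup.csv
    match_type = WILDCARD(ip)

    Search:
    index=myindex
    | lookup mylookup_by_ip_id ip, id OUTPUT name

With this, the row with ip=* matches any ip as long as id matches, which should produce the table above.
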
I have the following search:

index=felix_emea sourcetype="Felixapps:prod:log" Action = "Resp_VPMG"
| dedup EventIndex
| rex field=Message "^<b>(?<Region>.+)<\/b>"
| rex "Response Codes:\s(?<responseCode>\d{1,3})"
| rex field=Message ":\s(?<errCount>\d{1,4})$"
| fields "Action" "Region" "responseCode" "errCount"
| timechart sum(errCount) by Region

which is returning the following events:

Time                 Action     responseCode  Region                 errCount
21/11/2022 09:46:07  Resp_VPMG  912           VPMG - Wizink PRD-E5   14
21/11/2022 09:16:31  Resp_VPMG  911           Moneta IBS via VPMG    8
21/11/2022 03:02:07  Resp_VPMG  911           Moneta IBS via VPMG    129
21/11/2022 02:46:59  Resp_VPMG  911           Moneta IBS via VPMG    92
20/11/2022 20:31:38  Resp_VPMG  911           Moneta IBS via VPMG    16
20/11/2022 19:31:36  Resp_VPMG  911           Moneta IBS via VPMG    32
20/11/2022 02:26:45  Resp_VPMG  911           Addiko IBS via VPMG    7

I can display the results on a bar chart, but I have no visibility of the responseCode field.

If I copy the data into Power BI, I can easily get a visualisation showing errors by region and by responseCode (using Power BI small multiples, which seem to be the equivalent of Splunk's trellis). Can I recreate this visualisation in Splunk? Using the Trellis option only allows me to trellis the report by Region and not by responseCode.

Thanks, Steve

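One approach that may get close, assuming it is acceptable to split the timechart by a combined series rather than a single field: concatenate Region and responseCode and chart by that value.

    index=felix_emea sourcetype="Felixapps:prod:log" Action = "Resp_VPMG"
    | dedup EventIndex
    | rex field=Message "^<b>(?<Region>.+)<\/b>"
    | rex "Response Codes:\s(?<responseCode>\d{1,3})"
    | rex field=Message ":\s(?<errCount>\d{1,4})$"
    | eval series=Region." / ".responseCode
    | timechart sum(errCount) by series

Alternatively, if a time axis is not required, a stats sum(errCount) by responseCode Region result can feed a column chart trellised by responseCode.
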
Hi Everyone, I have 3 pie charts in a panel, showing agent statistics as follows:
- 1st pie chart displays overall statistics split by analyst;
- 2nd pie chart displays daily statistics split by analyst ( | where shift="Day");
- 3rd pie chart displays nightly statistics split by analyst ( | where shift="Night").

I've created a drilldown which works fine for the overall pie chart, and it correctly displays the data in another panel based on the value of the slice. To accomplish this I've created a token called "tokNames" and assigned it an initial value of * (all):

<init>
  <set token="tokNames">*</set>
</init>

Drilldown for the overall pie chart:

<drilldown>
  <set token="tokNames">$click.value$</set>
</drilldown>

The problem starts with the daily and nightly pie charts - when I click on a name, the panel displays all the statistics for that particular agent, instead of showing only the daily or only the nightly statistics. Any assistance would be greatly appreciated. Thank you in advance. Toma.

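A sketch of one way to handle this, assuming the detail panel's search can take a second token for the shift (tokShift and the analyst field name are made up for the example): have each pie chart's drilldown set both tokens, and default tokShift to * in the init block.

    <init>
      <set token="tokNames">*</set>
      <set token="tokShift">*</set>
    </init>

    <!-- drilldown on the daily pie chart -->
    <drilldown>
      <set token="tokNames">$click.value$</set>
      <set token="tokShift">Day</set>
    </drilldown>

    <!-- drilldown on the nightly pie chart -->
    <drilldown>
      <set token="tokNames">$click.value$</set>
      <set token="tokShift">Night</set>
    </drilldown>

    <!-- detail panel search filters on both tokens; search supports the * wildcard -->
    ... | search analyst="$tokNames$" shift="$tokShift$"

The overall pie chart's drilldown would set tokShift back to * so its clicks keep showing both shifts.
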
I am getting an error on both of my indexers when they attempt to join the cluster master:

Search peer Splunkindex1 has the following message: failed to register with cluster master reason: failed method=POST path=/services/cluster/master/peers/?output_mode=json master=splunkmaster:8089 rv=0 gotConnectionError=1 gotUnexpectedStatusCode=0 actual_response_code=502 expected_response_code=2xx staus_line="Error connecting: Winsock error 10061" socket_error="Winsock error 10061" remote_error=[event=addPeer status=retrying Add PeerRequest....

Does anyone have a solution for this? Thank you

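For what it's worth, Winsock error 10061 is "connection refused", which usually means nothing is answering on splunkmaster:8089 (manager not running, wrong host or port in server.conf, or a firewall blocking 8089 - and since this is Winsock, a Windows Firewall rule is worth checking too). A couple of quick checks from an indexer, with hostnames and credentials as placeholders:

    # is anything listening on the manager's management port?
    curl -k https://splunkmaster:8089/services/server/info -u admin:changeme

    # confirm the clustering settings the indexer is actually using
    $SPLUNK_HOME/bin/splunk btool server list clustering --debug
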
Hi All, I am new to AppD. I recently deployed a DB agent for an Oracle database running on a Linux machine. I am able to see the DB metrics; however, I am not able to see the hardware metrics. I have enabled hardware monitoring in the collector as well. (I have seen in the documentation that for Oracle it should show the hardware metrics by default.) I am not able to see the details in the metrics either. Can you please suggest what I am missing?
Hello, Myself and another gentleman have been tasked to integrate NSX-T TLS log forwarding to Splunk. Is there a list of exact instructions or a white paper showing how to accomplish this? Do we need to have our purchasing folks reach out for support as well? Very respectfully, James
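While waiting for official guidance, a minimal sketch of the Splunk side, assuming NSX-T is pointed at a TLS syslog input on a heavy forwarder (the port, index, sourcetype and certificate path below are placeholders; the NSX-T side is configured separately via its syslog export settings):

    inputs.conf on the receiving forwarder:
    [SSL]
    serverCert = /opt/splunk/etc/auth/mycerts/splunk-server.pem
    sslPassword = <certificate password>
    requireClientCert = false

    [tcp-ssl://6514]
    sourcetype = nsxt:syslog
    index = network

The certificate presented here must be trusted by NSX-T (or its CA imported there) for the TLS session to establish.
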
Hello All, I am currently using Ingest Actions on a HF to route my data to both the indexers and an S3 bucket. I managed to create my ruleset and everything is working fine, with data being successfully sent to AWS, but I can't access the Ingest Actions web UI anymore. When I try to access it, I get the message "Splunk is still initializing. Please retry later." Does someone have an idea how to fix this issue? Thanks
I have a dashboard that requires a dropdown in one of the lower panels of the page. The selectFirstChoice option appears to be the cause of the problem: when the dropdown populates, the page jumps down to it. Any way to work around this? Thanks
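One thing that may be worth trying (a sketch only; token, field and index names are invented): drop selectFirstChoice and give the dropdown a static choice plus a static default, so the token resolves immediately instead of waiting for the populating search and triggering the late re-render.

    <input type="dropdown" token="tokSelection" searchWhenChanged="true">
      <label>Selection</label>
      <choice value="*">ALL</choice>
      <default>*</default>
      <fieldForLabel>category</fieldForLabel>
      <fieldForValue>category</fieldForValue>
      <search>
        <query>index=myindex | stats count by category | fields category</query>
      </search>
    </input>
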
I want to create a Splunk alert for server traffic distribution. I have hundreds of servers of different types in each data center (app servers, DB servers, etc.). I can create a dashboard and Splunk alert for a specific set of servers, but here I want to create the dashboard and alert per data center. How can I build this?

For reference, this is the per-host query I have written. Can I cover all the hosts of a data center in one query?

index=*
| where host like "ANCLOPR%"
| bin span=5m _time
| stats count BY _time host
| eventstats sum(count) as total by _time
| eval percent = count / total*100
| chart values(percent) by _time host usenull=f useother=f limit=100

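One common pattern, sketched with an assumed lookup file host_to_datacenter.csv (columns host and datacenter) that would need to be maintained; deriving the data center from the hostname prefix works too if the naming convention allows it:

    index=*
    | lookup host_to_datacenter.csv host OUTPUT datacenter
    | bin span=5m _time
    | stats count by _time datacenter host
    | eventstats sum(count) as total by _time datacenter
    | eval percent = round(count / total * 100, 2)

    Or, instead of the lookup, derive it from the prefix (example values only):
    | eval datacenter=case(like(host,"ANCLOPR%"), "DC1", like(host,"XYZ%"), "DC2", true(), "other")

Either way, the same search can drive both a dashboard split by datacenter and an alert whose condition is evaluated per datacenter.
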
Time                 door  Fruit   Count
11/11/2022 04:36:07  112   APPLE   14
11/11/2022 04:10:00  111   PEAR    8
11/11/2022 03:01:02  111   PEAR    119
11/11/2022 02:41:49  111   PEAR    82
10/11/2022 21:41:18  111   PEAR    26
10/11/2022 18:11:16  111   PEAR    12
10/11/2022 01:36:15  111   Orange  5

I want to plot a timechart of the count of fruits for each door.

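Assuming door, Fruit and Count are already extracted fields (the index name is a placeholder), one way to get a separate series per door/fruit combination:

    index=fruit_data
    | eval series=door."-".Fruit
    | timechart span=1h sum(Count) by series

    Or, if only a per-door total is wanted:
    index=fruit_data
    | timechart span=1h sum(Count) by door
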
Hey, I have a big base search and I want to add a condition that filters out events where Asset_State is either "Development" or "Pre-Production", but ONLY IF Asset_Environment!="PKI Offline" and Status="2".

At the moment, this is the line in the query I have for this:

.......| if(Asset_Environment!="PKI Offline" Status="2", search NOT (Asset_State!="Development" OR Asset_State!="Pre-Production")) |....

Syntactically, I know this is incorrect... can someone please help? Many thanks as always!!!

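A sketch of the condition expressed with where (keeping the field names from the question; the surrounding base search is omitted). It drops an event only when all three parts are true and keeps everything else:

    ... | where NOT (Asset_Environment!="PKI Offline" AND Status="2" AND (Asset_State="Development" OR Asset_State="Pre-Production")) | ...
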
Hi guys, I have an issue with the Enterprise Security app where I try to add a new event attribute (user) that is correctly populated and available in the event (in the Contributing Events search) and in the data model, but it is not shown in the Incident Review table. It seems it could be a problem with the alias of the field, because in the raw data we see that the field name is "userPrincipalName", while in the Interesting Fields we have "user" (the field that is not shown in the Incident Review table). We also tried adding the userPrincipalName field to the Event Attributes, but that field is not populated either. How can we show that field in the table? Thanks, Mauro
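A quick check that may help narrow this down (the search name is a placeholder): Incident Review can only display attributes that exist on the notable event itself, so it is worth confirming whether the field is actually written to the notable index by the correlation search.

    index=notable search_name="My Correlation Search"
    | table _time, rule_name, user, userPrincipalName

If the field is empty here, the mapping (alias/eval) needs to happen before or inside the correlation search rather than only at Incident Review time.
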