All Posts

There's a talk here with some demos: https://community.splunk.com/t5/Splunk-Tech-Talks/Machine-Learning-Assisted-Adaptive-Thresholding/ba-p/676851
Your initial search might not be the best way to get what you're looking for in the first place. Remember that Splunk's subsearches have limits and can behave unpredictably, returning empty or incorrect results when those limits are reached.
+1 on that. Whenever possible, don't use SHOULD_LINEMERGE=true. It's a very expensive setting that causes Splunk to try to re-merge already split events into bigger ones. While it has some use in very specific edge cases, as a rule of thumb you should avoid it entirely. That's what a proper LINE_BREAKER is for.
OK. It's a start. So UDP packets are reaching your receiver box. What then? Are you trying to receive the data directly on your Splunk instance (or a forwarder)? Or are you using some intermediate syslog receiver like rsyslog or syslog-ng? Did you check your local firewall? If it's a Linux box, did you verify the rp_filter settings and routing?
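To confirm datagrams are actually reaching a listening process (and not just the box), a quick round-trip check over localhost can help isolate the problem. This is a generic sketch, not Splunk-specific; the message content is invented for illustration:

```python
import socket

def udp_round_trip(message: bytes) -> bytes:
    """Bind a UDP listener on an ephemeral localhost port, send one
    datagram to it, and read it back. If this works locally but real
    syslog traffic never shows up, suspect the firewall, rp_filter,
    or routing rather than the receiving application."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as listener:
        listener.bind(("127.0.0.1", 0))       # let the OS pick a free port
        listener.settimeout(2.0)
        port = listener.getsockname()[1]
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sender:
            sender.sendto(message, ("127.0.0.1", port))
        data, _addr = listener.recvfrom(65535)
        return data
```

If this succeeds but your real receiver sees nothing, the drop is happening before the socket layer of your receiving application.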
Do you have an example of this? I'm trying to work through it but I can't get anything to work. 
You can expect Splunk to use environment variables only in the cases documented in the .conf file specs. So if you want to use a variable's value, you need to resolve the variable yourself within the script.
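For a Python scripted input, resolving the variable inside the script itself might look like this sketch; the variable name MY_SETTING and the default value are invented for illustration:

```python
import os

def resolve_setting(name: str, default: str) -> str:
    """Read an environment variable inside the script itself, instead
    of expecting Splunk to expand it in a .conf file."""
    return os.environ.get(name, default)
```

The key point is that the expansion happens in your code at run time, under whatever environment splunkd gives the script.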
Okay, so it looks like you're getting a list of dictionaries returned, so it shouldn't be too hard for us to figure this out. I'm not an expert code writer, by any means, but I've messed around with it enough to be able to troubleshoot at least, so I'll try to help. You should be able to do something like this, where 'example' is the response from the previous call:

for x in example:
    for k, v in x.items():
        if k == 'name':
            server_name = v
            url = 'https://su-ns-vpx-int-1.siteone.com/nitro/v1/config/lbvserver_servicegroup_binding/' + server_name
            <create API call here using new url>
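As a runnable sketch of the same idea, building the per-server URLs from a list of dictionaries (the base URL and server names here are placeholders, and the actual HTTP call is left out):

```python
def build_binding_urls(example, base_url):
    """For each dict in the API response, pull the 'name' value and
    build the per-server binding URL. Entries without a 'name' key
    are skipped."""
    urls = []
    for x in example:
        server_name = x.get("name")
        if server_name:
            urls.append(base_url + server_name)
            # the actual API call against this URL would go here
    return urls
```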
Thanks @richgalloway. Trying it out now. Will let you know if it works.
Hi Splunkers. The idea is to pull any new file creations in a particular folder inside C:\users\<username>\appdata\local\somefolder. I wrote a batch script to pull and index this data. It's working, but the issue is I cannot define a token for users. E.g., in the script, if I specify the path as C:\users\<user1>\appdata\local, the batch script runs as expected and data is indexed into Splunk, but if I replace user1 with %userprofile% or %localappdata%, the batch script does not run. How do I resolve this?
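One thing worth checking is whether those variables are even defined in the context the script runs in: Splunk inputs typically run under the splunkd service account, where %userprofile% and %localappdata% may be unset or point somewhere unexpected. A small Python sketch of expanding Windows-style variables explicitly against the current environment (the variable name and path below are made up):

```python
import ntpath
import os

def expand_windows_path(path: str) -> str:
    """Expand %VAR% references the way cmd.exe would, using this
    process's environment; unset variables are left untouched, which
    is one way a path can silently stay literal and break a script."""
    return ntpath.expandvars(path)
```

If the expansion comes back unchanged, the variable simply doesn't exist in the account the input runs under, and you'd need to supply the path some other way.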
Sending one response since it applies to all. Thank you everyone for your guidance and knowledge! It always helps to have a strong community to back some of these questions, so I appreciate it!
Try these settings:

[applog_test]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)ERROR
NO_BINARY_CHECK = true
category = Custom
disabled = false
pulldown_type = true
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
TIME_PREFIX = ERROR\s+

Don't specify BREAK_ONLY_BEFORE_DATE if you want to break at something other than a date.  Also, don't use both BREAK_ONLY_BEFORE_DATE and LINE_BREAKER in the same stanza.  When using LINE_BREAKER, set SHOULD_LINEMERGE to false.
@gcusello, I am looking at only a forwarder and an indexer.
If your app is running in IIS, and you restarted both the agent and IIS, it should work. Here are some questions:
1. If you are running a machine agent on the same server, please ensure that in the machine agent's controller-info.xml you set the .NET compatibility mode to "true".
2. Ensure you generate some load on the IIS application; it will only register the tiers if there is load on them. The easiest way is to open the browse option for whichever applications you have deployed and click through the app, or refresh the base page a number of times.
So, if an ip address from lookup_ist_cs_checkin_rooms.csv matches with a message "display button:panel-*" and it matches with an ip address in a message with "Ipaddress(from request header)", do you want to include it or exclude it? That is, which condition takes precedence?
You are absolutely right. Splunk ran under the root account. I have changed it already, but it didn't help. Normal universal forwarders work great; only the Splunk servers don't change configuration. But I will handle it using ../local/ files as you suggested. Thank you,
You could do something like this

| rex "process (?<process>\d+) start date (?<start>\S+), end date (?<end>\S+)"
| eval startdate=strptime(start,"%d/%m/%Y")
| eval enddate=relative_time(strptime(end,"%d/%m/%Y"), "+1d")
| eval days=mvappend(startdate, enddate)
| eval row=mvrange(0,2)
| mvexpand row
| eval _time=mvindex(days, row)
| eval count=1-(row*2)
| stats sum(count) as change by _time
| streamstats sum(change) as total
| makecontinuous _time
| filldown total
| fillnull value=0 change
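The SPL above is a sweep-line: each process contributes +1 on its start date and -1 on the day after its end date, and a running total over a continuous day range gives the per-day count. A minimal Python sketch of the same technique, using the four example processes from the question:

```python
from collections import Counter
from datetime import date, timedelta

def daily_in_flight(intervals):
    """Sweep-line over (start, end) date pairs: +1 on the start date,
    -1 on the day after the end date, then a running total per day."""
    change = Counter()
    for start, end in intervals:
        change[start] += 1
        change[end + timedelta(days=1)] -= 1
    first, last = min(change), max(change)  # last = day after the final end
    totals, running, day = {}, 0, first
    while day < last:
        running += change.get(day, 0)
        totals[day] = running
        day += timedelta(days=1)
    return totals

counts = daily_in_flight([
    (date(2024, 8, 12), date(2024, 8, 16)),  # process 1
    (date(2024, 8, 12), date(2024, 8, 12)),  # process 2
    (date(2024, 8, 13), date(2024, 8, 15)),  # process 3
    (date(2024, 8, 14), date(2024, 8, 16)),  # process 4
])
```

This reproduces the table the poster expects: 2, 2, 3, 3, 2 for the 12th through the 16th.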
@R15 wrote:

    Neither is working for me. Their search gives an unwieldy table with 100+ columns; yours has only blanks for avg and max. Splunk 9.1.2

If not, here's the query:

| rest /services/search/jobs
| stats count avg(performance.command.search.expand_search.duration_secs) AS avg max(performance.command.search.expand_search.duration_secs) AS max BY search
The following query retrieves confroom_ipaddress values from the lookup table that do not match IP addresses found in the indexed logs:

| inputlookup lookup_ist_cs_checkin_rooms.csv where NOT [search index=fow_checkin message="display button:panel-*"
| rex field=message "ipaddress: (?<ipaddress>[^ ]+)"
| stats values(ipaddress) as confroom_ipaddress
| table confroom_ipaddress]
| rename confroom_ipaddress as ipaddress1

I would like to add an additional condition to include IP addresses that match those found in the following logs:

index=fow_checkin "Ipaddress(from request header)"
| rex field=message "IpAddress\(from request header\):\s*(?<ip_address>\S+)$"
| stats values(ip_address) as ip_address2

This means we need to include IP addresses from lookup_ist_cs_checkin_rooms.csv that match the message "Ipaddress(from request header)" and exclude IP addresses from lookup_ist_cs_checkin_rooms.csv that match the message "display button:panel-*" as well. Please help.
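In plain set terms, the requirement reads as (lookup ∩ header IPs) minus panel IPs. A tiny sketch of that logic, assuming the exclude condition takes precedence when an IP matches both message types (that precedence is exactly the open question here, and the IPs below are invented):

```python
def select_ips(lookup_ips, header_ips, panel_ips):
    """Include lookup IPs seen in 'Ipaddress(from request header)'
    events, then drop any that also appear in 'display button:panel-*'
    events. Which condition wins on overlap is an assumption."""
    return (set(lookup_ips) & set(header_ips)) - set(panel_ips)
```

Once the precedence is settled, this set expression maps back to a combination of subsearches or a lookup-based filter in SPL.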
Hi @AL3Z, yes, it's possible, but you should define the purpose, the boundaries and the use cases of your lab. In other words, what architecture do you need to test: a distributed environment? Only a forwarder and an indexer? What else? In my lab I have seven virtual machines, with two Indexers, three Search Heads, a Management Node (Cluster Manager, Deployer, License Master, Monitoring Console and Deployment Server), and one Universal Forwarder. I did it on a PC that has 16 vCPUs and 32 GB RAM. As I said, you can do this; it depends on your requirements and the resources you have. Ciao. Giuseppe
Hi all, hoping someone can help me with this query. I have a data set that looks at a process and how long it takes to implement. Each event is populated with a start date and an end date. I want to create a calendar view that shows the schedule of the processes in implementation, for example:

process 1: start date 12/08/2024, end date 16/08/2024 (5 days implementation)
process 2: start date 12/08/2024, end date 12/08/2024 (1 day implementation)
process 3: start date 13/08/2024, end date 15/08/2024 (3 days implementation)
process 4: start date 14/08/2024, end date 16/08/2024 (3 days implementation)

I want to be able to produce a graph or a calendar view that shows how many processes we have in implementation, counting each day of their implementation period (based on start and end date), so for the above example it would look like:

Date                 Count of processes in implementation
12/08/2024    2 (processes 1 and 2)
13/08/2024    2 (processes 1 and 3)
14/08/2024    3 (processes 1, 3 and 4)
15/08/2024    3 (processes 1, 3 and 4)
16/08/2024    2 (processes 1 and 4)

Any help greatly appreciated.