All Topics


I've been given output of a query that makes use of the "last 30 days" time range, and need to know exactly what "last 30 days" means. The data is aggregated, so the output does not have a date field in it. I do not have access to Splunk directly so I cannot run test queries.  For example, if the query is run on 8-31-2022, does "last 30 days" give me 8/2 to 8/31 or would it be 8/1 to 8/30?
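For reference, Splunk's "Last 30 days" time-range preset typically corresponds to earliest=-30d@d and latest=now — that is, it snaps back 30 full days to midnight and runs up to the moment the search executes. Under that assumption, a query run on 8/31/2022 would cover 8/1 00:00:00 through the run time on 8/31. A sketch of the equivalent explicit time modifiers (index name is a placeholder):

```
index=my_index earliest=-30d@d latest=now
| stats count
```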
Hello, I'm seeing duplicated forwarders reported in the Cloud Monitoring Console: the same set of forwarders appears in both active and missing status. Is there a workaround? Could rebuilding the forwarder assets be an option? The forwarders report to Splunk Cloud through a Heavy Forwarder instance. Thanks!
Good afternoon! I have a problem setting up alerts. All of my alerts except one are processed incorrectly. The alerts run on a schedule over the last 1 minute. By "incorrectly" I mean false positives: they fire on every scheduled run even when the search conditions are not met during that period. The searches themselves return correct results when run manually, so I suspect a syntax problem, since the settings of the correct alert and the problematic alerts are identical.

Examples:

Alert that works fine:

index="main" sourcetype="testsystem-script11"
| transaction maxpause=10m srcMsgId Correlation_srcMsgId messageId
| table _time srcMsgId Correlation_srcMsgId messageId duration eventcount
| fields _time srcMsgId Correlation_srcMsgId messageId duration eventcount
| sort srcMsgId _time
| streamstats current=f window=1 values(_time) as prevTime by subject
| eval timeDiff=_time-prevTime
| delta _time as timeDiff
| where (timeDiff)>1

An example of a problematic alert (I thought the Cyrillic characters were the problem, but removing them did not help):

index="main" sourcetype="testsystem-script99" resultcode>0
| eval srcMsgId_Исх_Сообщения=if(len('Correlation_srcMsgId')==0 OR isnull('Correlation_srcMsgId'),'srcMsgId','Correlation_srcMsgId')
| eval timeValue='eventTime'
| eval time=strptime(timeValue,"%Y-%m-%dT%H:%M:%S.%3N%Z")
| sort -eventTime
| streamstats values(time) current=f window=1 as STERAM_RESULT global=false by srcMsgId_Исх_Сообщения
| eval diff=STERAM_RESULT-time
| stats list(diff) as TIME_DIF list(eventTime) as eventTime list(srcMsgId) as srcMsgId_Бизнес_Сообщения list(routepointID) as routepointID count as Кол_Сообщений by srcMsgId_Исх_Сообщения
I want to filter the search results based on the tx_id that I extract in the 2nd rex — meaning only those results whose transaction_id matches tx_id. I tried a where clause but it doesn't work:

{search_results}
| rex field=MESSAGE "(?<JSON>\{.*\})"
| rex field=MESSAGE "Published Event for txn_id (?<tx_id>\w+)"

I tried this:

{search_results}
| rex field=MESSAGE "(?<JSON>\{.*\})"
| rex field=MESSAGE "Published Event for txn_id (?<tx_id>\w+)"
| where JSON.transaction_id in (tx_id)
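A dotted name like JSON.transaction_id is not addressable that way in where. A sketch that first extracts transaction_id from the captured JSON with spath and then compares two plain field names (json_tx_id is a field name introduced here for illustration):

```
{search_results}
| rex field=MESSAGE "(?<JSON>\{.*\})"
| rex field=MESSAGE "Published Event for txn_id (?<tx_id>\w+)"
| spath input=JSON path=transaction_id output=json_tx_id
| where json_tx_id=tx_id
```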
Say we have events like this:

_time                      fw   src_ip         dest_ip  dest_port  fw_rule_action
8/1/22 1:30:00.000 AM      fw1  192.168.50.51  8.8.8.8  53         block
1/1/22 1:30:00.000 AM      fw1  192.168.50.51  8.8.8.8  53         permit
12/31/21 1:30:00.000 AM    fw1  192.168.50.51  8.8.8.8  53         permit

We want to find the events that changed based on fw_rule_action. The real-world scenario: you consolidated the rule base and, after applying it, want to see whether some events are now permitted that were blocked in the past, and vice versa. What is the right approach to find the (in this example) block events? Is creating a baseline the right way?
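One possible sketch, not necessarily the baseline approach the question asks about: group events by the connection tuple and keep only tuples whose action has more than one distinct value over the search window (the index name is a placeholder):

```
index=firewall
| stats dc(fw_rule_action) as action_count, values(fw_rule_action) as actions, latest(fw_rule_action) as latest_action by src_ip, dest_ip, dest_port
| where action_count > 1
```

Tuples surviving the where clause are the ones whose verdict flipped; latest_action shows what the rule does now.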
root@ubuntu-linux-22-04-desktop:/opt/splunk/bin# uname -a
Linux ubuntu-linux-22-04-desktop 5.15.0-48-generic #54-Ubuntu SMP Fri Aug 26 13:31:33 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux

sudo tar xvzf splunk-9.0.1-82c987350fde-Linux-x86_64.tgz -C /opt

root@ubuntu-linux-22-04-desktop:/opt/splunk/bin# sudo ./splunk start --accept-license
./splunk: 1: Syntax error: Unterminated quoted string

Error starting Splunk on Ubuntu (running under Parallels Desktop on my Apple MacBook Pro):

Model Name: MacBook Pro
Model Identifier: MacBookPro18,1
Chip: Apple M1 Pro
Total Number of Cores: 10 (8 performance and 2 efficiency)
Memory: 16 GB
Hi all - I am having trouble pulling mv fields out into separate events. My data looks like this: [screenshot]. I'd like to pull each event out onto its own line, but I'm having trouble with the carriage returns and with getting the fields to pair correctly (i.e., error 1232 goes with server1). Example search:

| makeresults
| eval error="1232 2345 5783 5689 2345 5678 5901", server="server1 server2 server3 server4 server6 server9 server7"
| makemv delim=" " error
| makemv delim=" " server
| eval uniquekey=mvzip(server,error, ":")

How do I separate these fields into their own events so the data looks like:

1232  server1  server1:1232
2345  server2  server2:2345
5783  server3  server3:5783
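A common pattern for this, assuming the two multivalue fields stay index-aligned: zip them into one combined field (as above), mvexpand that field into separate events, then split each pair back apart:

```
| makeresults
| eval error="1232 2345 5783 5689 2345 5678 5901", server="server1 server2 server3 server4 server6 server9 server7"
| makemv delim=" " error
| makemv delim=" " server
| eval uniquekey=mvzip(server, error, ":")
| mvexpand uniquekey
| eval server=mvindex(split(uniquekey, ":"), 0), error=mvindex(split(uniquekey, ":"), 1)
| table error server uniquekey
```

mvexpand creates one event per value of uniquekey, and the split/mvindex evals recover the paired server and error for each row.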
Good morning, I'm curious whether anyone has used a similar dataset in Splunk and/or has suggestions on the best way to build a usable solution. I have a list of IP addresses, and each IP address has a list of allowable systems (IPs). If any of the IP addresses communicates with a system outside its allowable list, I want to be alerted. I know I could probably create individual alerts for each of these, but I would like to process them in bulk — for example, have Splunk periodically cross-reference the IP list against the network data to find violations. Could a lookup table be used for this?
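A lookup-based sketch, with every name a placeholder: assume a CSV lookup allowed_systems.csv whose columns are src_ip and allowed_dests (allowed_dests being a multivalue list of permitted destination IPs). mvfind treats its second argument as a regex, so anchoring the literal IP is a rough but workable test:

```
index=network_traffic
| lookup allowed_systems.csv src_ip OUTPUT allowed_dests
| where isnotnull(allowed_dests)
| eval match_idx=mvfind(allowed_dests, "^".dest_ip."$")
| where isnull(match_idx)
```

Rows that survive are communications from a listed src_ip to a destination not on its allowable list; the result could feed a scheduled alert.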
I have followed the instructions in https://dev.splunk.com/enterprise/docs/devtools/python/sdk-python/howtousesplunkpython/howtocreatemodpy/#To-create-modular-inputs-programmatically and the YouTube tutorial https://www.youtube.com/watch?v=-M3MWJGdNJE&t=1029s&ab_channel=Splunk%26MachineLearning. I have installed Splunk Enterprise on my Windows 10 computer and created a folder with the required structure. In it is inputs.conf.spec with the following:

[tmdb_input:://<name>]
api_key = <value>
lang = <value>
page_number = <value>
region = <value>

I have a tmdb_input.py file in the bin directory. When I start Splunk, the app "TMDB Input" is there, but when I go to Settings -> Data Input, the input for my app is not present. I have looked at the log files in "C:\Program Files\Splunk\var\log\splunk" but cannot find anything that helps me locate the problem; I searched for ERROR and for any reference to my tmdb_input app, with no luck. Does anyone have insight into how I might troubleshoot why "TMDB Input" does not appear in Settings -> Data Input?
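For comparison, modular input spec stanzas conventionally use the form scheme://<name> with a single pair of colons; the stanza quoted above ([tmdb_input:://<name>]) has an extra colon, which may be worth checking. A minimal sketch of the conventional form:

```
[tmdb_input://<name>]
api_key = <value>
lang = <value>
page_number = <value>
region = <value>
```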
I have 2 fields: the values of fieldA are present in fieldB and I need to remove the first part of fieldB up to the values present in fieldA. For example: fieldA = BP498 fieldB = "A1John/Doe SmithBP498 XX XX XX" Desired: fieldB="BP498 XX XX XX" Thanks in advance.
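One sketch using split: split fieldB on the value of fieldA, keep what follows, and prepend fieldA again. This assumes fieldA occurs exactly once in fieldB, as in the example:

```
| makeresults
| eval fieldA="BP498", fieldB="A1John/Doe SmithBP498 XX XX XX"
| eval fieldB=fieldA.mvindex(split(fieldB, fieldA), 1)
```

split(fieldB, fieldA) yields two parts around "BP498"; mvindex(..., 1) takes the part after it, and concatenating fieldA back in front gives "BP498 XX XX XX".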
Hi everyone, I am searching for all events sent from different servers, by hour, to detect whether any server is down (i.e., sends nothing), in which case I want to send an alert. The raw data looks like this:

_time                count
2022-09-27T10:17:48  1
2022-09-27T09:57:19  1
2022-09-27T09:56:28  1
2022-09-27T09:56:26  1

I search for events with span=1h and get a table like this (there are several servers, but I show one as an example):

_time             Server  Count
27/09/2022 12:00  A       0
27/09/2022 11:00  A       0
27/09/2022 10:00  A       1
27/09/2022 09:00  A       3
27/09/2022 08:00  A       9
27/09/2022 07:00  A       10

This works, but not well for the current hour. Imagine it is 12:05 now: when I run the search and filter by count = 0, I get 2 lines. But if the first event from server A arrives at, say, 12:30, the same filter then returns only the 11:00 line. What I want is to take count = 0 into account only once the hour has fully passed, so the alert is accurate: in the example, the 12:00 bucket should match only if the server sends nothing during the whole hour (12:00 - 12:59). Currently I do something like this:

| where count = 0 AND _time != relative_time(now(), "-1h")

but do you have a better solution? I hope that is clear. Thanks for your help!
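A sketch of one way to drop the still-open bucket: snap now() to the top of the hour and keep only buckets that start before it, so a bucket is evaluated only after it has fully elapsed (index and field names are placeholders):

```
index=server_events
| timechart span=1h count by Server
| untable _time Server count
| where _time < relative_time(now(), "@h") AND count=0
```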
Hi, I am trying to run a Python script on my universal forwarder, which sends data to a Splunk Cloud instance. I have added the path in inputs.conf, but no events appear in my index. Checking the splunkd logs shows the error "The system cannot find the file specified". What could be the problem?
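One thing worth checking: the universal forwarder does not bundle a Python interpreter, so a scripted input has to invoke an interpreter that actually exists on the host (on Windows, often via a .bat wrapper that calls the system Python with the script's full path). A sketch with placeholder paths and names:

```
[script://.\bin\run_my_script.bat]
interval = 300
index = my_index
sourcetype = my_script_output
disabled = 0
```

Here run_my_script.bat would contain a line such as "C:\Python310\python.exe" "%~dp0my_script.py" — all of these paths are assumptions to adapt to the actual host.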
I want to create a bar chart from the logs where the key is the stats field name and the value is the summed count. Query:

search1 | eval has_error = if(match(_raw, "WARNING"),1,0) | stats sum(has_error) as field1
| join instance [search2 | eval has_error = if(match(_raw, "WARNING"),1,0) | stats sum(has_error) as field2
| join instance [search3 | eval has_error = if(match(_raw, "WARNING"),1,0) | stats sum(has_error) as field3
| join instance [search4 | eval has_error = if(match(_raw, "WARNING"),1,0) | stats sum(has_error) as field4]]]
| stats sum(field1), sum(field2), sum(field3), sum(field4)

Current result:

field1  field2  field3  field4
30      44      122     6

Expected result:

Field   Count
field1  30
field2  44
field3  122
field4  6
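A sketch of the final reshaping step: transpose turns the single result row into one row per field, emitting columns named "column" and "row 1", which can then be renamed:

```
... | stats sum(field1) as field1, sum(field2) as field2, sum(field3) as field3, sum(field4) as field4
| transpose
| rename column as Field, "row 1" as Count
```

The "..." stands for the existing join pipeline above; the renamed two-column table charts directly as Field vs. Count.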
Can I convert a playbook-type input to automation in Splunk SOAR (5.3.4)? Thanks for helping.
Hi All, I have a Splunk query like this:

(index=Prod sourcetype=ProdApp (host=Prod01 OR Prod02) source="/prodlib/SPLID" "Response" ERR-12120)
| rex "^(?:[^\[\n]*\[){6}(?P<u>\w+)"
| rex field=_raw "(?<my_json>\{.*)"
| spath input=my_json output=customerName path=response.login.customerName
| spath input=my_json output=responseCode path=response.responseHeader.responseContext.responseCode
| dedup customerName
| table customerName,responseCode
| append [search index=Prod sourcetype=ProdApp (host=Prod01 OR Prod02) source="/prodlib/SPLID" "Request"
| rex "^(?:[^\[\n]*\[){6}(?P<u>\w+)"
| rex field=_raw "(?<my_json>\{.*)"
| spath input=my_json output=userId path=data.userId
| dedup userId
| table userId]

I am trying to join the two sources, Request and Response; the result is in the attached screenshot. My question is: how do I show the 5 user IDs (the blue line)? When I join both sources, the user IDs shown are not related to the customer names (the black line).
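append only stacks rows from the two searches; to line user IDs up with customer names, the searches need a shared key. A sketch assuming the extracted field u is that correlation key (that assumption needs verifying against the actual log format):

```
index=Prod sourcetype=ProdApp (host=Prod01 OR Prod02) source="/prodlib/SPLID" "Response" ERR-12120
| rex "^(?:[^\[\n]*\[){6}(?P<u>\w+)"
| rex field=_raw "(?<my_json>\{.*)"
| spath input=my_json output=customerName path=response.login.customerName
| spath input=my_json output=responseCode path=response.responseHeader.responseContext.responseCode
| join type=left u
    [ search index=Prod sourcetype=ProdApp (host=Prod01 OR Prod02) source="/prodlib/SPLID" "Request"
    | rex "^(?:[^\[\n]*\[){6}(?P<u>\w+)"
    | rex field=_raw "(?<my_json>\{.*)"
    | spath input=my_json output=userId path=data.userId ]
| dedup customerName
| table customerName, userId, responseCode
```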
Hi team, we have performance logs in Splunk, but the retention policy is only 3 months, so I can't see a full year of metric trends in Splunk. Is there any way Splunk can export data into MongoDB? Then I could connect Power BI to MongoDB and do the analysis in Power BI. Thanks, Cherie
 
For the type of data I am trying to extract, Event Sampling really speeds up the query. This works fine when executing SPL queries, but I have not been able to figure out how to do this in a dashboard. Found some older posts where "rand" was used, but apparently that did not speed up the query.   Is it possible to specify Event Sampling directly in a Search Query or in the Dashboard in some way?
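If the dashboard is Simple XML, event sampling can be set per search with the sampleRatio element (a ratio of 100 keeps roughly 1 in 100 events). A sketch with a placeholder query:

```
<search>
  <query>index=my_index sourcetype=my_sourcetype | stats count by host</query>
  <sampleRatio>100</sampleRatio>
</search>
```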
Hi, I have multiple panels that need to run timecharts like these:

something | table _time,A,B | search A="1" | timechart count by B
something | table _time,A,B | search A="2" | timechart count by B
something | table _time,A,B | search A="3" | timechart count by B

I want to optimize my dashboard for performance by using a base search, so I tried this:

<search id="base">
  <query>something | table _time,A,B</query>
</search>
....
<panel>
  <chart>
    <search base="base">
      <query>search A="1" | timechart count by B</query>
    </search>
  </chart>
</panel>
...
<panel>
  <chart>
    <search base="base">
      <query>search A="2" | timechart count by B</query>
    </search>
  </chart>
</panel>
...
<panel>
  <chart>
    <search base="base">
      <query>search A="3" | timechart count by B</query>
    </search>
  </chart>
</panel>

It works great over short time ranges (24h), but with wider ranges (30 days) I lose events because of the base search limit (probably the default of 500,000 events). Is there a way I can use a base search for this? I'm using Splunk Enterprise version 8.1.3.
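One commonly suggested workaround, sketched here with assumed field names and span: make the base search transforming (pre-aggregated) so it returns statistical rows rather than raw events, since the event cap applies to base searches returning events. The post-process searches then re-aggregate the stats rows:

```
<search id="base">
  <query>something | bin _time span=1h | stats count by _time, A, B</query>
</search>

<search base="base">
  <query>search A="1" | timechart span=1h sum(count) by B</query>
</search>
```

The trade-off is that the post-process span cannot be finer than the bin span chosen in the base search.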
I am using the query below and visualizing it as a line chart. A date field appears on the line chart, and I want to remove it through XML without removing the time field. Can someone guide me? (I was able to remove it in the query using a field format command, but that was not very helpful because I then could not see the visualization.) I was also able to remove the hover through this article: Solved: How to disable mouse hover on bar chart in XML - Splunk Community. But not the date. This one is very close to what I want, but did not solve my case on the line chart: Solved: How to delete the date category on a visualization... - Splunk Community.

Query for reference:

index=xyz sourcetype=abc earliest=-60m@m latest=@m
| eval ReportKey="Today"
| append [search index=xyz sourcetype=abc earliest=-60m@m-1w latest=@m-1w | eval ReportKey="LastWeek" | eval _time=relative_time(_time, "+1w")]
| append [search index=xyz sourcetype=abc earliest=-60m@m-2w latest=@m-2w | eval ReportKey="TwoWeeksBefore" | eval _time=relative_time(_time, "+2w")]
| append [search index=xyz sourcetype=abc earliest=-60m@m-3w latest=@m-3w | eval ReportKey="ThreeWeeksBefore" | eval _time=relative_time(_time, "+3w")]
| timechart span=1m count(index) as Volume by ReportKey