All Topics

Hello Splunk Community, I cannot figure out how to update a KV store lookup table with a scheduled alert. I want to append a search result every minute to an already existing KV store lookup table. But when I specify that KV store lookup in the alert settings, Splunk tells me that the file name of my lookup table has to end with .csv, which it doesn't. I don't want it to, because I am not using a static lookup table; I am using a KV store lookup. Can I not use alerts with KV store lookups? Cheers, Fritz
Hi, I have a column chart with multiple overlay fields (see the blue, orange, and yellow lines below). Right now I am displaying values for all of them. Is there a way to only show data values for one of those lines? Thank you.
Inside Nodes --> JMX --> JMX Metrics --> View JMX Metrics --> JMX --> Thread Pools --> threadPoolModule --> Webcontainer --> poolsize: I am not able to find out why only poolsize appears inside Webcontainer; we should also get the ActiveCount metric here.
I would like to break the "X" field into multiple fields based on the available values. "X" contains data in the following format:

ABC: YES, APPLICATION: DEF, ZONE: DATA, ENVIRONMENT: DEV
ZONE: INSIDE, ENVIRONMENT: PROD
ZONE: OUTSIDE, ENVIRONMENT: DEV, ABC: YES, APPLICATION: IJK

I would like to arrange the data in the following format:

ABC    APPLICATION    ZONE       ENVIRONMENT
YES    DEF            DATA       DEV
                      INSIDE     PROD
YES    IJK            OUTSIDE    DEV

TIA.
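[Editor's note] In Splunk itself this is usually a job for the extract command (with pairdelim="," and kvdelim=":") or a rex with max_match. Outside SPL, the same key/value parsing can be sketched in Python; the sample strings are copied from the question, and the column order is an assumption taken from the desired table:

```python
import re

# Sample values of field X, copied from the question above.
raw = [
    "ABC: YES, APPLICATION: DEF, ZONE: DATA, ENVIRONMENT: DEV",
    "ZONE: INSIDE, ENVIRONMENT: PROD",
    "ZONE: OUTSIDE, ENVIRONMENT: DEV, ABC: YES, APPLICATION: IJK",
]

# Each "KEY: VALUE" pair becomes a column; keys absent from a row stay empty,
# which mirrors the blank cells in the desired table.
rows = [dict(re.findall(r"(\w+):\s*(\w+)", x)) for x in raw]
columns = ["ABC", "APPLICATION", "ZONE", "ENVIRONMENT"]
table = [[r.get(c, "") for c in columns] for r in rows]
```

The point of the dict-per-row step is that missing keys simply fall back to an empty string instead of shifting later columns.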
Hi, I have been trying to figure this out for a week now and I am stuck. I installed the Microsoft 365 Defender Add-on for Splunk, which is the officially supported TA according to the Microsoft partner docs. I enabled the input for endpoint alerts and expected the TA to index all alerts since the "start time" (2 weeks ago). But only one event was sent: the earliest event in the 14-day period, in my case an event from 6/9/2021 (the input was enabled on the 23rd of June). Splunk's internal logs only tell me that the connection was successful (status 200) and how long it took. I double-checked all the SIEM integration docs from Microsoft, like what permissions need to be set and so on. Here is the link: https://docs.microsoft.com/en-us/microsoft-365/security/defender-endpoint/api-hello-world?view=o365-worldwide Does anyone know if there are more options in Azure that need to be turned on to make it more talkative? In Defender itself I can see far more than 1 alert in the last 14 days. Thank you
Hi, I want to apply the below arrow dynamically in my dashboard. I have used custom JS and CSS for it. However, it is not working for my query:

index = *highjump_server* | search Server_location=Abilene | rename Server_location as DC | table DC, Server_Msg, Server_type, Status_Code | stats sum(Status_Code) AS Status | eval B=case(Status=0,"UP",Status=1,"DOWN") | eval _time=now() | table B _time

Whereas it is working for:

| makeresults | eval A="UP" | eval B="DOWN" | fields B

I am not able to understand why, though. Can you please help here urgently?
Hi, I have a problem with Splunk that has been dragging on too long, as I can't find the cause. I have a lab in which I want to access Splunk with the users I have in LDAP. As it is a lab, I have only created the default "Admin" user and another called "Josemaría", but I always run into this trouble. My configuration is as follows. And if it helps, here is my tree in LDAP, which is very small because it is only for testing. I need help and would greatly appreciate anyone who tries to help me. Thank you very much in advance, greetings!
So currently I have:

Name    Branch    Age
Tom     USA       21
Tom     India     23
Pat     India     26

Can someone please show me how to find the "Tom" matches on the "Name" field and then change the Branch to USA for both Toms? Thanks.
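[Editor's note] The transformation being asked for is just a conditional overwrite keyed on Name; in SPL that is typically a single eval such as eval Branch=if(Name=="Tom","USA",Branch). The same logic, sketched in Python with the rows copied from the table above:

```python
# Rows copied from the table in the question.
rows = [
    {"Name": "Tom", "Branch": "USA",   "Age": 21},
    {"Name": "Tom", "Branch": "India", "Age": 23},
    {"Name": "Pat", "Branch": "India", "Age": 26},
]

# Overwrite Branch for every row whose Name matches "Tom";
# all other rows are left untouched.
for r in rows:
    if r["Name"] == "Tom":
        r["Branch"] = "USA"
```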
I am encountering problems joining 2 queries that get values from 2 different sourcetypes. I would like to get the average and maximum CPU load (as well as a trend line) of all my hosts, filtered by HostGroup. But the HostGroups are only linked to hosts in sourcetype=entity, and the CPU load values are located in sourcetype=metrics. Here are the 2 queries, an explanation of their results, and the combination I tried to make of the two.

index=my_index sourcetype="metrics" timeseriesId="host.cpu.user"
| eval _time = strptime(timestamp, "%Y-%m-%d %H:%M:%S")
| stats avg(value) as AvgCPU, max(value) as MaxCPU, values(unit) as Unit, sparkline(avg(value)) as Trend by hostName
| eval AvgCPU = round(AvgCPU,2), MaxCPU = round(MaxCPU, 2)

This query returns the average and maximum CPU load per host, which is the result I'm trying to get, except I need it filtered by HostGroup. And the only way for me to filter hosts by HostGroup is to use this query:

index=my_index sourcetype="entity" hostGroup.name="*"
| spath
| stats values(discoveredName) as hostName by hostGroup.name

So I tried combining the two queries using the mvexpand command:

index=my_index sourcetype="entity" hostGroup.name=$hostGroup_token$
| spath
| stats values(discoveredName) as hostName
| mvexpand hostName
| join [ search index=my_index sourcetype="metrics" timeseriesId="host.cpu.user"
    | eval _time = strptime(timestamp, "%Y-%m-%d %H:%M:%S")
    | stats avg(value) as AvgCPU, max(value) as MaxCPU, values(unit) as Unit, sparkline(avg(value)) as Trend
    | eval AvgCPU = round(AvgCPU,2), MaxCPU = round(MaxCPU, 2)]

The problem is that this particular query returns only 1 row: the average and maximum CPU load across all my hosts. Any idea how to join the 2 queries so that it returns the CPU load and max filtered by HostGroup?
Hello, I am ingesting files containing hosts and the ports for each host. For each source (file), the nodes (hosts) and ports are being extracted, and since I have many ports per node I have data such as:

FILE #1, NodeX, Port: 443/tcp, 80/tcp, 21/tcp (and more...)

Using mvexpand, for FILE #1 I have:

Node, Port
X, 443
X, 80
X, 21

For FILE #2 I have the same. For FILE #3 I have:

X, 443
X, 80

(one port is missing; in other scenarios one is added), so the port count for Node X per file will be:

FILE, Count
File#1, 3
File#2, 3
File#3, 2

I want to catch when there is a change in that count and raise an alert. I managed to show it on a chart, but as I have many nodes the chart is not suitable. Can anyone advise the best way to catch this variance and set the alert? Thank you.
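[Editor's note] One way to frame the alert condition: for each node, compare the port count in each file with the count in the previous file and flag any difference. In SPL this is roughly a stats dc(Port) by FILE, Node followed by streamstats to fetch the previous count. The comparison itself, sketched in Python with the counts from the example above (lexical ordering of the file names is an assumption):

```python
from collections import defaultdict

# (file, node) -> port count, as in the example above.
counts = {
    ("File#1", "X"): 3,
    ("File#2", "X"): 3,
    ("File#3", "X"): 2,
}

# Group the per-file counts by node, keeping files in (lexical) order.
by_node = defaultdict(list)
for (f, node), c in sorted(counts.items()):
    by_node[node].append((f, c))

# Flag every node whose count changed between consecutive files.
alerts = []
for node, series in by_node.items():
    for (prev_f, prev_c), (cur_f, cur_c) in zip(series, series[1:]):
        if cur_c != prev_c:
            alerts.append((node, prev_f, cur_f, prev_c, cur_c))
```

An alert would then fire whenever the alerts list is non-empty, which scales to many nodes without needing one chart line per node.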
I have a CSV file with the below data that I am trying to push to Splunk. Example:

Thu JUN 24  15:27:52 +08 2021,name1,address1,Thu  JUN25  12:27:52  +08 2021,Active
Thu JUN 24  15:27:52 +08 2021,name2,address2,Thu JUN 25  03:65:52  +08 2021,Active
Thu JUN 24  15:27:52 +08 2021,name3,address3,Thu JUN 25  05:15:52  +08 2021,Active
Thu JUN 24  15:27:52 +08 2021,name4,address4,Thu MAY26  06:25:52  +08 2021,Active
Thu JUN 24  15:27:52 +08 2021,name5,address5,Thu MAY26  06:15:52  +08 2021,Active
Thu JUN 24  15:27:52 +08 2021,name6,address6,Thu JAN14  07:15:52  +08 2021,Active

These are my settings in props.conf, using the fourth field as the timestamp:

SHOULD_LINEMERGE= FALSE
FIELD_DELIMETER=,
HEADER_FIELD_DELIMETER=,
FIELD_NAMES=Time,names,address,creationtime,status
TIMESTAMP_FIELDS=creationtime
TZ=Asia/Singapore

With the above props I am able to push only the latest date's data; the other events are missing in Splunk. For example, I can see only the JUN 25 data; the remaining events are missing. Can someone explain what might be the cause?
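[Editor's note] As a side observation, the fourth-field timestamps are irregular: spacing varies (JUN 25 vs JUN25, single vs double spaces), one row contains the invalid minute value 03:65:52, and the bare +08 offset is rejected by many strptime-style parsers, all of which can break timestamp extraction on some rows. A quick Python check of the base format (normalizing whitespace and padding the offset to +0800 is my own workaround, not something from the post):

```python
from datetime import datetime, timedelta

raw = "Thu JUN 24  15:27:52 +08 2021"   # first timestamp from the sample data

# Collapse the doubled spaces and pad the bare "+08" offset to "+0800"
# so that a strptime-style format string can parse it.
norm = " ".join(raw.split()).replace(" +08 ", " +0800 ")
dt = datetime.strptime(norm, "%a %b %d %H:%M:%S %z %Y")
```

Rows whose timestamps deviate from this shape would need the same kind of normalization before a single format string can cover them all.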
I want to get error log counts from Windows event logs from multiple servers. I want to create a separate dashboard where I can see the error log counts in chart format, and below that the error logs in detail.
Kindly help me out with a query to find the top 10 indexers w.r.t. maximum allocated index storage.
I find using PCRE regex to split a field into components frustrating. I know my regex works, as I've validated it in both regex101 and debuggex:

(?:.*?)(?P<ClientIP>(?:\d{1,3}\.){3}\d{1,3}|(?:(?:[0-9a-f]{1,4}(?::+)?){0,7}:+[0-9a-f]+))[,\n\r]+(?:(?:[\+](?P<LB_IP>[^:](?:\d{1,3}\.){3}\d{1,3}):(?P<LB_Port>\d+)))?

This extracts details from the IIS X_Forwarded_For field. The supplied log data parsed and extracted perfectly on both platforms, and even using "grep -P". But in Splunk, I only get a full extraction when the following format is observed:

123.123.123.123,+123.123.123.123,+123.123.123.123:12345

If the final ip:port is missing from the event, only the first IP is captured:

123.123.123.123,+123.123.123.123

I've had similar experiences over the years with Splunk, so I'm wondering if my regex fu is rubbish, regex validators are wrong, or Splunk has a bug that's never been fixed. TIA, Steve
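[Editor's note] This looks like ordinary backtracking semantics rather than a Splunk bug: when a trailing (?:...)? group can succeed as the empty string at the first position the engine tries, the match is reported as successful and the final ip:port is never revisited. Anchoring the optional tail to the end of the value and making the intermediate hops explicit forces the longer interpretation to be tried first. A sketch in Python's re, used here only to illustrate the backtracking behavior; the simplified pattern below is my own, not the poster's:

```python
import re

# Simplified X-Forwarded-For pattern: client IP at the start, zero or more
# ",+ip" hops, then an OPTIONAL trailing ",+ip:port" anchored to end-of-value.
PAT = re.compile(
    r"^(?P<ClientIP>(?:\d{1,3}\.){3}\d{1,3})"       # client IP at the start
    r"(?:,\+(?:\d{1,3}\.){3}\d{1,3})*?"             # intermediate hops, lazily
    r"(?:,\+(?P<LB_IP>(?:\d{1,3}\.){3}\d{1,3})"     # optional final hop ...
    r":(?P<LB_Port>\d+))?$"                         # ... with its port, at $
)

# Sample values copied from the question.
full = PAT.search("123.123.123.123,+123.123.123.123,+123.123.123.123:12345")
short = PAT.search("123.123.123.123,+123.123.123.123")
```

With the $ anchor in place, the first sample yields both the client IP and the final ip:port, while the second still matches with the optional groups empty. Splunk's rex also uses PCRE, so the same anchoring trick should carry over, though that is an assumption worth testing against real events.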
Hi, I am attempting to create a simple column chart using JSON data from a single event. The Rows{}.S03PERFC value represents a percentage. I have attempted to convert this but still can't seem to get it to display as a chart; for some reason the fields are greyed out at the bottom. Is there anything obvious I'm missing here?
Hi, I've written the query below, which joins 2 different event types from the same source with different filters.

source="C:\\Logs\\*" host="myHost" index="dumpchutes" "TELEGRAMID=[42]" "LastDestination=[51]"
| rex "OriginalDestination=\[(?<OriginalDestination>[^\]]+)"
| rex "OriginalDestinationState=\[(?<OriginalDestinationState>[^\]]+)"
| rex "EntrancePoint=\[(?<EntrancePoint>[^\]]+)"
| rex "EntranceState=\[(?<EntranceState>[^\]]+)"
| rex "ExitPoint=\[(?<Chute>[^\]]+)"
| rex "ExitState=\[(?<ExitState>[^\]]+)"
| rex "BarcodeScannerId=\[(?<BarcodeScanner>[^\]]+)"
| rex "BarcodeScannerDataState=\[(?<BarcodeScannerDataState>[^\]]+)"
| rex "BarcodeScannerData=\[(?<LPN>[^\]]+)"
| rex "Length=\[(?<Length>[^\]]+)"
| rex "Width=\[(?<Width>[^\]]+)"
| rex "Height=\[(?<Height>[^\]]+)"
| join LPN [ search index="dumpchutes" source="C:\\Logs\\*" "TELEGRAMID=[CONTAINERSTATUS]" "ContainerState=[DIVERTED]" "Divert=[PALLETIZE]" "ReasonCode=[DS]"
    | rex "LocationBarcode=\[(?<LocationBarcode>[^\]]+)"
    | rex "PalletId=\[(?<ChuteHolder>[^\]]+)"
    | rex "LpnNumber=\[(?<LPN>[^\]]+)"
    | rex "WcsLocation=\[(?<WcsLocation>[^\]]+)"
    | rex "LocationBarcode=\[(?<LocationBarcode>[^\]]+)" ]
| table _time Chute ChuteHolder LPN Length Width Height
| dedup LPN

I'm having a number of issues here. Using join is too slow; is there another way instead of using join? Is it better to save both searches into 2 tables and then join them in the search part?
Hi there, here is a segment of my sample data (the data is in text format). My props.conf file is provided below. I am having trouble figuring out what to write for TIME_PREFIX in my props.conf (please see below). Any help will be highly appreciated, thank you.

SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
CHARSET=UTF-8
TIME_PREFIX=
TIME_FORMAT=%Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD=18

Thank you and regards,
We were using Splunk with a web.conf file with port 18000. Suddenly the port changed to the default 8000. What could be the reason for this? Please note: we see that the Splunk license is expired. Does the license have anything to do with the deletion of this web.conf file or the reversion to the default port?
I am trying to compare the count of events with previous days within business hours. Here is my query:

index=abc
| search "userId:"
| where date_hour>=9 AND date_hour<=17
| rex field=message "userId: (?<customerId>.*)"
| timechart span=1h dc(customerId) as "Unique customer count"
| timewrap d

I am trying to see the chart data only between 9 AM and 5 PM, but it is showing data (a bar chart) on a 24-hour scale, with blanks before 9 AM and after 5 PM. How can I adjust the query or the time picker to get the desired output?
As the title suggests, I am after a method to do a whois lookup for an IP in search results and append the whois data to the same results. I have found the Network Toolkit app for this, but it's not Splunk supported, and unfortunately the environment I am working in doesn't allow unsupported apps. Hence, if anyone knows any other way to achieve the above, please let me know. Thanks!