All Topics

Is there a way of showing a warning to the user based on their SPL? My use case is that users should not generally search indexes which are fed into an accelerated data model. Specifically, it's faster and more accurate to search the network_traffic ADM than a firewall index.
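For context, a sketch of the accelerated-data-model search users should be steered toward, assuming the CIM Network_Traffic data model and its standard field names:

```spl
| tstats summariesonly=true count from datamodel=Network_Traffic
    where nodename=All_Traffic
    by All_Traffic.src All_Traffic.dest All_Traffic.action
| rename All_Traffic.* as *
```

As far as I know there is no built-in hook that warns on a user's SPL; it is typically approximated by auditing searches in index=_audit or by constraining expensive searches with workload management rules.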
Hi, I have the following query:

  index=* sourcetype=CustomAccessLog | table host, source

The output is:

  host                 source
  server32.de.db.com   /path/to/server/instances/IFM_RT_1/logs/subdir_logs/log.file
  server31.de.db.com   /path/to/server/instances/IFM_RT_2/logs/subdir_logs/log.file

I need to alter the search query so that the output becomes:

  host   source
  32     IFM_RT_1
  31     IFM_RT_2

I tried the following for the IFM_RT_ part:

  index=* sourcetype=CustomAccessLog | rex field=_raw "(?<IFM_RT_>.*)"

but I couldn't get the needed data. Can I have your help here? Thanks!
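A hedged sketch of one way to get there, extracting the host number from host and the instance name from source (the regexes assume the sample values shown above):

```spl
index=* sourcetype=CustomAccessLog
| rex field=host "^server(?<host_num>\d+)\."
| rex field=source "instances/(?<instance>IFM_RT_\d+)/"
| rename host_num as host, instance as source
| table host, source
```

Running rex against the already-extracted host and source fields avoids matching the whole _raw event, which is why the original `rex field=_raw "(?<IFM_RT_>.*)"` captured everything rather than the instance name.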
I have a column, Type, with multiple values separated by newline characters:

  ID    Type
  01    Type_A
        Type_B
        Type_C

I need new columns with values 1 or 0 for each row (e.g. the row with ID 01) based on whether each value is present in Type:

  ID    Type      Type_A   Type_B   Type_C
  01    Type_A    1        1        0
        Type_B
  02    Type_A
        Type_B
        Type_C
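A hedged sketch, assuming Type is a multivalue field and the indicator columns are known in advance:

```spl
| eval Type_A=if(isnotnull(mvfind(Type, "^Type_A$")), 1, 0)
| eval Type_B=if(isnotnull(mvfind(Type, "^Type_B$")), 1, 0)
| eval Type_C=if(isnotnull(mvfind(Type, "^Type_C$")), 1, 0)
```

mvfind() returns the index of the first matching value or null if there is no match. If Type is a single string with embedded newlines rather than a true multivalue field, split it first with | eval Type=split(Type, "\n").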
Hello, I'm currently using Splunk Enterprise with Udemy, but my license expired and I can't go forward without renewing it. My question is: how do I renew the license, given that when I set Splunk up I used the free version?
Hi folks. I posted here recently exploring how to automatically add an index when docker-compose brings up the splunk container. That post is at https://forums.docker.com/t/add-an-entrypoint-and-command-without-messing-up-the-invocation/123686

I had it working, but it no longer does. I somewhat suspect a docker volume problem, somewhat suspect a permissions issue, and also somewhat suspect an OS upgrade, but I really don't know what the problem is.

Inside the splunk container, I see:

  [root@splunk splunk]# cat /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf
  [http]
  disabled = 0

  [http://splunk_hec_token]
  disabled = 0
  token = really-big-token-thingie

which is really not what I want. And outside the splunk container (on the macOS side), I see:

  $ cat splunk-files/opt-splunk-etc-apps-splunk_httpinput-local/inputs.conf
  cmd output started 2022 Mon May 02 04:19:43 PM PDT
  [http]
  disabled = 0

  [http://splunk_hec_token]
  disabled = 0
  token = really-big-token-thingie
  index = dev_game-publishing

That is what I want. In my docker-compose, I have (among other things):

  volumes:
    - ./splunk-files/opt-splunk-etc-apps-splunk_httpinput-local/ /opt/splunk/etc/apps/splunk_httpinput/local/

(That long volume line is all one line; it may or may not wrap when you view it, though it is wrapping in this editor.) I tried both setting up a volume for the entire directory and just that one file. I'm hearing that mounting an entire directory tends to be more reliable, but both failed the same way.
The directory containing the file is owned by splunk and has restrictive permissions:

  [ansible@splunk splunk]$ cat /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf
  cat: /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf: Permission denied
  [ansible@splunk splunk]$ ls -l /opt/splunk/etc/apps/splunk_httpinput/
  total 12
  drwxr-xr-x 2 splunk splunk 4096 Jan 15 03:31 default
  drwx------ 2 splunk splunk 4096 May  2 22:14 local
  drwx------ 2 splunk splunk 4096 May  2 22:14 metadata

which explains why the ansible user can't cat it. But is ansible painting itself into a corner and preventing itself from making all the changes I need?

I also upgraded from macOS 11.x to 12.3 between when this was working and when it stopped; I don't know if that's related or not.

I have next to no Splunk experience and even less Ansible. Thanks for any and all suggestions!
Hi. Has anyone come across hidden double quotes (") in a field, and how to remove them (maybe a "sed" regex)? The double quotes don't appear in the Splunk field or even in an Excel CSV export; they only appear when you save the CSV as a txt file. Apparently Splunk sees them, because it adds additional lines in my search. See below for the Excel export and the corresponding saved txt. It looks to be quoting the contents of the AdminGroup field.

CSV:

  src_host    AdminGroup
  computer1   Domain Admins
              LocalAdmins

Text file:

  src_host    AdminGroup
  computer1   "Domain Admins
              LocalAdmins"
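A hedged sketch of a sed-style cleanup at search time, assuming the stray characters are literal double quotes and embedded line breaks inside AdminGroup:

```spl
| rex mode=sed field=AdminGroup "s/\"//g"
| rex mode=sed field=AdminGroup "s/[\r\n]+/ /g"
```

If the hidden characters are breaking events at index time, the same substitutions can be applied there with SEDCMD in props.conf for the sourcetype.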
Hello Splunkers, I have built a server using a Shuttle with VMware/Hyper-V, and I have Splunk downloaded and installed on the servers I created. I am able to reach the Splunk home page at its IP address, but when I enter my login ID and password I get this: "The server encountered an unexpected condition which prevented it from fulfilling the request. Cl
Scenario:

We have a data source of interest that we wish to analyze: hourly host activity events. An endpoint agent installed on a user's host monitors for specific events. The endpoint agent reports these events to the central server (aka manager/collector), and the central server then sends the data/events to Splunk for ingest.

We found that a distinct count of specific action events per hour per host is very interesting to us. If the hourly count per user is greater than the "normal behavior" average, we want to be alerted. We define normal behavior as the 90-day average of distinct hourly counts per host/user, and we define an outlier/alert as an hourly distinct count more than 2 standard deviations above the 90-day hourly average. For instance, if the 90-day hourly average is 2 events for a host, then 10 events in a single hour for that host would fire an alert.

We tried many different methods and found some anomalies. One issue is the events' arrival time in Splunk: the data does not always arrive in a consistent interval. The endpoint agent may be delayed in processing or sending the data to the central server if the network connection is lost or the host was suspended/shut down shortly after the events of interest occurred. We have accepted this issue as it's very infrequent.

Methodology:

Our analysis has multiple phases.

Phase 1 > prepare the data and output to a KV store lookup

We run a query to prime the historic data:

  index=foo earliest=-90d@h latest=-1h@h foo_event=* host=*
  | timechart span=1h dc(foo_event) as Foo_Count by host limit=0
  | untable _time host Foo_Count
  | outputlookup 90d-Foo_Count

Then we modify and save the query to append the new data; we use -2h@h and -1h@h to mitigate lagging events. This report runs first, every hour at minute=0.
  index=foo earliest=-2h@h latest=-1h@h foo_event=* host=*
  | timechart span=1h dc(foo_event) as Foo_Count by host limit=0
  | untable _time host Foo_Count
  | outputlookup 90d-Foo_Count append=t

Phase 2 > calculate the upperBound for each user

This report runs second, every hour at minute=15. We add additional statistics for investigation purposes.

  | inputlookup 90d-Foo_Count
  | timechart span=1h values(Foo_Count) as Foo_Count by host limit=0
  | untable _time host Foo_Count
  | stats min(Foo_Count) as Mini max(Foo_Count) as Maxi mean(Foo_Count) as Averg stdev(Foo_Count) as sdev median(Foo_Count) as Med mode(Foo_Count) as Mod range(Foo_Count) as Rng by host
  | eval upperBound=(Averg+sdev*exact(2))
  | outputlookup Foo_Count-upperBound

Phase 3 > trim the oldest data to maintain a 90d@h interval

This report runs third, every hour at minute=30.

  | inputlookup 90d-Foo_Count
  | eval trim_time = relative_time(now(),"-90d@h")
  | where _time>trim_time
  | convert ctime(trim_time)
  | outputlookup 90d-Foo_Count

Phase 4 > detect outliers

This alert runs fourth (last), every hour at minute=45.

  index=foo earliest=-1h@h latest=@h foo_event=* host=*
  | stats dc(foo_event) as Foo_Count by host
  | lookup Foo_Count-upperBound host output upperBound
  | eval isOutlier=if('Foo_Count' > upperBound, 1, 0)

This method successfully alerts on outliers. Regarding event lag, we monitor and keep track of how significant it is.

Originally, we tried using the MLTK with a DensityFunction and partial fit; however, we have approximately 65 million data points, which causes issues with the Smart Outlier Detection assistant.

The question is whether anyone has a different or more efficient way to do this. Thank you for your time!
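For comparison, a hedged sketch of the same detection as a single hourly search, recomputing the 90-day baseline on the fly instead of maintaining the lookups. It is simpler to operate but rescans 90 days of data on every run, which is why the multi-phase lookup pipeline above may still be preferable at this volume:

```spl
index=foo earliest=-90d@h latest=@h foo_event=* host=*
| bin _time span=1h
| stats dc(foo_event) as Foo_Count by _time, host
| eventstats avg(Foo_Count) as Averg, stdev(Foo_Count) as sdev by host
| eval upperBound=Averg+2*sdev
| where _time>=relative_time(now(), "-1h@h") AND Foo_Count>upperBound
```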
I'm currently building a query that will pull data from today back to April 26th. The field contains the following time format:

  termination_initiated (field name)
  2022-05-02T11:47:01.011-07:00
  2022-05-02T11:42:10.820-07:00

I'm trying to convert it so that I only get results between today and April 26th. I've tried this piece of code with no luck:

  | eval terminiation_started=strptime(termination_initiated,"%Y-%m-%dT %H:%M:%S.%QZ")
  | where termination_started>=relative_time(now(),"-6d@d")
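A hedged guess at the fix: remove the stray space after the T, parse the milliseconds with %3N and the -07:00 offset with %:z (the data has an offset, not a literal Z), and keep the field name consistent (the eval creates terminiation_started but the where clause reads termination_started):

```spl
| eval termination_started=strptime(termination_initiated, "%Y-%m-%dT%H:%M:%S.%3N%:z")
| where termination_started>=relative_time(now(), "-6d@d")
```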
When searching with "All time" in the filter dropdown, I am getting a NaN value for "$tokLatest$", and I don't know why. For other ranges, like week-to-date and month-to-date, it comes out fine; the issue only occurs for All time. Below are the code snippets. Any solution for this? How can we use an if/else condition in the case of NaN, so that I can use now() instead?

  <search>
    <query>| makeresults</query>
    <earliest>$timepicker.earliest$</earliest>
    <latest>$timepicker.latest$</latest>
    <progress>
      <eval token="tokEarliest">strptime($job.earliestTime$,"%Y-%m-%dT%H:%M:%S.%3N%z")</eval>
      <eval token="tokLatest">strptime($job.latestTime$,"%Y-%m-%dT%H:%M:%S.%3N%z")</eval>
      <eval token="tokEarliest1">strftime(relative_time(tokEarliest,"-330m"),"%Y-%m-%d %H:%M:%S.%3N")</eval>
      <eval token="tokEarliest1">strftime(relative_time(tokLatest,"-330m"),"%Y-%m-%d %H:%M:%S.%3N")</eval>
    </progress>
  </search>
  <description>draft event ingestion rate by wfm at day or hour level</description>
  <fieldset submitButton="true" autoRun="false">
    <input type="time" token="timepicker" searchWhenChanged="false">
      <label>Time Range</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
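With All time, $job.latestTime$ is generally not an ISO timestamp (it can come back empty or as a sentinel value), so strptime returns nothing and the downstream math yields NaN. A hedged sketch of a fallback eval, mirroring the original token style:

```xml
<eval token="tokLatest">if(isnum(strptime($job.latestTime$,"%Y-%m-%dT%H:%M:%S.%3N%z")), strptime($job.latestTime$,"%Y-%m-%dT%H:%M:%S.%3N%z"), now())</eval>
```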
I have several fields I want to lump into one multivalue field, removing blanks. At the start of an event there are up to 6 IP addresses, either internal or external but not both (the source IP, plus any LB hops along the way). They get extracted to either internal_src_ip# or external_src_ip#. If it is an internal IP, then external_src_ip# will be "-", i.e. blank.

If I run

  | eval OriginIP2 = mvappend(internal_src_ip, external_src_ip, internal_src_ip2, external_src_ip2, internal_src_ip3, external_src_ip3, internal_src_ip4, external_src_ip4, internal_src_ip5, external_src_ip5, internal_src_ip6, external_src_ip6)
  | eval OriginIP2 = mvfilter(match(OriginIP2, "^(?!-)"))

I get exactly what I want: a multivalue list in the field OriginIP2 with "-" removed. However, putting it together in one line (to automate as a Calculated Field) gives me an error:

  | eval OriginIP2 = mvfilter(match(mvappend(internal_src_ip, external_src_ip, internal_src_ip2, external_src_ip2, internal_src_ip3, external_src_ip3, internal_src_ip4, external_src_ip4, internal_src_ip5, external_src_ip5, internal_src_ip6, external_src_ip6), "^(?!-)"))

  Error in 'eval' command: The arguments to the 'mvfilter' function are invalid.

As I read the docs, mvappend() should return a single mv field for match() to operate on, and then for match() to send to mvfilter(). What am I missing?
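A hedged explanation: per the eval docs, mvfilter() only accepts a Boolean expression that references a single field, so it rejects an expression whose inner argument is a function call like mvappend() rather than a field name. A sketch that keeps it at search time, using the same field names:

```spl
| eval OriginIP2 = mvappend(internal_src_ip, external_src_ip, internal_src_ip2, external_src_ip2, internal_src_ip3, external_src_ip3, internal_src_ip4, external_src_ip4, internal_src_ip5, external_src_ip5, internal_src_ip6, external_src_ip6)
| eval OriginIP2 = mvfilter(OriginIP2!="-")
```

This two-step form can't be collapsed into one calculated field, since calculated fields are evaluated independently and cannot reference each other; an eval-based macro is one possible workaround.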
Hi everybody, I have the following problem and cannot seem to wrap my head around it. I have a bunch of eventtypes (close to 1000). Some of those eventtypes have certain thresholds which are greater than zero; I look the values up from a CSV. For a single host, I'd like to chart the number of occurrences for an eventtype if that number of occurrences is higher than the aforementioned threshold. The chart shall also contain a static line depicting the threshold value.

Here is what I have so far. I believe I always get lost when using an aggregate function such as count(), because adding something to the result using eval just won't work:

  index="my_index" eventtype=* host="$HOST_FROM_DROPDOWN$"
  | lookup my-events eventtype
  | eventstats count by eventtype
  | where alert_threshold > 0 AND count > alert_threshold
  | stats count by eventtype
  | eval Threshold = alert_threshold

What I do understand is that I have to add the "Threshold" variable in the overlay options of the chart. Any help is much appreciated. Thank you
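One hedged restructuring, assuming the lookup my-events maps eventtype to alert_threshold: aggregate first, then look up the threshold, so alert_threshold survives past the stats and can feed both the filter and the overlay:

```spl
index="my_index" eventtype=* host="$HOST_FROM_DROPDOWN$"
| stats count by eventtype
| lookup my-events eventtype OUTPUT alert_threshold
| where alert_threshold > 0 AND count > alert_threshold
| eval Threshold = alert_threshold
| fields eventtype, count, Threshold
```

In the original order, the final stats discards alert_threshold, which is why the trailing eval finds nothing to copy into Threshold.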
Hello, has anyone opted for the Splunk Cloud workload pricing model? I would like to understand the pros and cons of opting for this model. Please share your thoughts. Thanks!
I get the following error in splunkd. Can anyone please help?

  ERROR DispatchReaper - Failed to reap $SPLUNK_HOME\var\run\splunk\dispatch\scheduler_1650877200_30023_310983C8-ADCC-4257-9A92-C56D31781CA1 because of Access is denied.
Hi, I have SPL that generates months of data, and I want to subtract just the last two columns. The fields will change month to month, so I can't hard-code them. Given the sample below, how can I get lastMonthDiff without hard-coding the field values? Thank you! Chris

  | makeresults
  | eval "2202-01"=1 | eval "2202-02"=2 | eval "2202-03"=5 | eval "2202-04"=4
  | append [| makeresults | eval "2202-01"=4 | eval "2202-02"=5 | eval "2202-03"=7 | eval "2202-04"=3]
  | append [| makeresults | eval "2202-01"=5 | eval "2202-02"=2 | eval "2202-03"=7 | eval "2202-04"=9]
  | fields - _time
  | foreach * [eval lastMonthDiff = '2202-03' - '2202-04']
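A hedged sketch using foreach to walk the month columns in order, remembering the previous value as it goes (this assumes the columns appear in chronological order and all match the 2202-* pattern):

```spl
| foreach "2202-*" [ eval prevVal=lastVal, lastVal='<<FIELD>>' ]
| eval lastMonthDiff = prevVal - lastVal
| fields - prevVal, lastVal
```

After the foreach, lastVal holds the last column's value and prevVal the second-to-last, so the subtraction matches the hard-coded '2202-03' - '2202-04' in the sample without naming either field.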
Hi all, my query has:

  .... | stats latest(time) as recent_event, latest(key) as recent_key, count by field1, field2

and the output has columns in this order (where count is obtained because of "count by"):

  field1  field2  recent_event  recent_key  count

Is it possible to change the order of the columns to:

  recent_event  count  field1  recent_key  field2
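A hedged sketch: appending a table command after the stats reorders the columns without changing the results:

```spl
.... | stats latest(time) as recent_event, latest(key) as recent_key, count by field1, field2
| table recent_event, count, field1, recent_key, field2
```

The fields command works the same way here if the search feeds a downstream command rather than a results table.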
Hi all, I have to configure Splunk Cloud to ingest AWS logs, and it's the first time for me. I saw the Data Manager app, and I think ingestion from AWS should be very easy; the thing I haven't understood is whether there's some prerequisite in terms of apps. In other words, before enabling the AWS Data Manager input, is it better to install TA_AWS, AWS_App, both, or neither? Also, do you have any hints about index creation, or any additional points of attention? Thank you in advance. Ciao. Giuseppe
Hello All, we are getting a "Failed to load source for MC KPI Indicator visualization" error in the Splunk Monitoring Console. We tried accessing it in another browser, cleared the browser cache, and restarted splunkd, but still some panels in the DMC work and some show the above error. Does anyone have any idea how to solve it? Many greetings, Justyna
Hello everyone! I am currently integrating Splunk into our project, working with a local installation of Splunk Enterprise to test the waters and find my way around Splunk itself. I am using the HttpEventCollectorSender class from the Splunk package. My issue is the following: no matter in which format I send a message with the HEC sender, I always get the following exception:

  Web Exception: Server Reply:
  {"text":"Error in handling indexed fields","code":15,"invalid-event-number":0}
  Response: StatusCode: 400, ReasonPhrase: 'Bad Request', Version: 1.1, Content: System.Net.Http.HttpConnectionResponseContent, Headers:
  {
    Date: Mon, 02 May 2022 10:39:30 GMT
    X-Content-Type-Options: nosniff
    Vary: Authorization
    Connection: close
    X-Frame-Options: SAMEORIGIN
    Server: Splunkd
    Content-Type: application/json; charset=utf-8
    Content-Length: 78
  }
  HResult: -2146233088

The code I use for sending is almost line by line from the Splunk HEC tutorial (I added some more Send calls at the bottom to try out different formats):

  var middleware = new HttpEventCollectorResendMiddleware(0);
  var ecSender = new HttpEventCollectorSender(
      new Uri("https://splunkserverdefaultcert:8088/"),
      <token>,
      null,
      HttpEventCollectorSender.SendMode.Sequential,
      0, 0, 0,
      middleware.Plugin
  );
  ecSender.Send(Guid.NewGuid().ToString(), "INFO", null, <message>);
  ecSender.Send(Guid.NewGuid().ToString(), "INFO", <message>);
  ecSender.Send(data: <message>);
  ecSender.Send(message: <message>);
  ecSender.Send(Guid.NewGuid().ToString(), "INFO", null, data: new { testProperty = "testing" });
  ecSender.Send(data: new { testProperty = "testing" });
  ecSender.FlushAsync().Start();

No matter how I format the message, I get the error mentioned above. Since the error seems to indicate a formatting issue, I already tried different formats for the message.
Looking into the errors that get logged, I can see how the actual message being sent looks, so I can confirm that the following formats do not work:

  {"time":"1651492587,089","event":{"data":"This is an event"}}
  {"time":"1651492587,089","event":{"message":"This is an event"}}
  {"time":"1651494076,162","event":{"id":"00588efd-f403-4cf7-95ce-4ef2a28b0f93","severity":"INFO","data":"This is an event"}}
  {"time":"1651494076,162","event":{"id":"00588efd-f403-4cf7-95ce-4ef2a28b0f93","severity":"INFO","message":"This is an event"}}

However, if I just do it with curl as follows, everything works perfectly fine:

  curl https://splunkserverdefaultcert:8088/services/collector/event/1.0 -k -H "Authorization: Splunk <token>" -d "{\"time\":\"1651492587\",\"event\":{\"data\":\"This_is_an_event\"}}"

Do you know what could be causing this error, and what I am doing wrong?

Edit: I can now say that this also happens with other Splunk servers, not only my local one. Curl works, but the HEC service implementation always throws the error mentioned above. If you have any ideas, I would be really thankful for some input!