All Topics


While searching over All time in the filter dropdown, I get a NaN value for "$tokLatest$", and I don't know why. Other ranges, like Week to date and Month to date, come through fine; the issue only occurs with All time. Below is the code snippet. How can I use an if/else condition to catch the NaN case, so that I can use now() instead? Any solution?

    <search>
      <query>| makeresults</query>
      <earliest>$timepicker.earliest$</earliest>
      <latest>$timepicker.latest$</latest>
      <progress>
        <eval token="tokEarliest">strptime($job.earliestTime$,"%Y-%m-%dT%H:%M:%S.%3N%z")</eval>
        <eval token="tokLatest">strptime($job.latestTime$,"%Y-%m-%dT%H:%M:%S.%3N%z")</eval>
        <eval token="tokEarliest1">strftime(relative_time(tokEarliest,"-330m"),"%Y-%m-%d %H:%M:%S.%3N")</eval>
        <eval token="tokLatest1">strftime(relative_time(tokLatest,"-330m"),"%Y-%m-%d %H:%M:%S.%3N")</eval>
      </progress>
    </search>
    <description>draft event ingestion rate by wfm at day or hour level</description>
    <fieldset submitButton="true" autoRun="false">
      <input type="time" token="timepicker" searchWhenChanged="false">
        <label>Time Range</label>
        <default>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </default>
      </input>
    </fieldset>

(Note: the second tokEarliest1 eval in the original snippet duplicated the token name; since it computes from tokLatest, it is shown here as tokLatest1.)
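One possible guard, a minimal sketch using the same Simple XML <eval> element and token names as the question: strptime() returns null when $job.latestTime$ does not match the expected format (which appears to be what happens for All time), so testing the result with isnum() allows a fallback to now():

    <progress>
      <eval token="tokLatest">if(isnum(strptime($job.latestTime$,"%Y-%m-%dT%H:%M:%S.%3N%z")),
          strptime($job.latestTime$,"%Y-%m-%dT%H:%M:%S.%3N%z"), now())</eval>
    </progress>

The same pattern would apply to tokEarliest. This is a sketch, not a confirmed fix; validate the actual value $job.latestTime$ carries under All time before relying on it.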
I have several fields I want to lump into one multivalue field, removing blanks. At the start of an event there are up to 6 IP addresses, either internal or external, but not both (the source IP, plus any LB hops along the way). They get extracted to either internal_src_ip# or external_src_ip#. If it is an internal IP, then external_src_ip# will be "-", i.e. blank. If I run

    | eval OriginIP2 = mvappend(internal_src_ip, external_src_ip, internal_src_ip2, external_src_ip2, internal_src_ip3, external_src_ip3, internal_src_ip4, external_src_ip4, internal_src_ip5, external_src_ip5, internal_src_ip6, external_src_ip6)
    | eval OriginIP2 = mvfilter(match(OriginIP2, "^(?!-)"))

I get exactly what I want: a multivalue list in the field OriginIP2 with "-" removed. However, putting it together in one line (to automate as a calculated field) gives me an error:

    | eval OriginIP2 = mvfilter(match(mvappend(internal_src_ip, external_src_ip, internal_src_ip2, external_src_ip2, internal_src_ip3, external_src_ip3, internal_src_ip4, external_src_ip4, internal_src_ip5, external_src_ip5, internal_src_ip6, external_src_ip6), "^(?!-)"))

    Error in 'eval' command: The arguments to the 'mvfilter' function are invalid.

As I read the docs, mvappend() should return a single mv field for match() to operate on, and then for match() to hand to mvfilter(). What am I missing?
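A likely explanation: mvfilter()'s argument must be a Boolean expression that references exactly one field, so a nested mvappend() call is rejected regardless of the match() wrapper. A minimal sketch that satisfies that constraint, using the field names from the question:

    | eval OriginIP2 = mvappend(internal_src_ip, external_src_ip, internal_src_ip2, external_src_ip2,
        internal_src_ip3, external_src_ip3, internal_src_ip4, external_src_ip4,
        internal_src_ip5, external_src_ip5, internal_src_ip6, external_src_ip6)
    | eval OriginIP2 = mvfilter(OriginIP2 != "-")

Note that calculated fields cannot reference other calculated fields, so the two-step version may need to live in a macro or in the search itself rather than as a single calculated field.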
Hi everybody, I have the following problem and cannot seem to wrap my head around it:

- I have a bunch of eventtypes (close to 1000).
- Some of those eventtypes have thresholds greater than zero; I look the values up from a CSV.
- For a single host, I'd like to chart the number of occurrences per eventtype IF that number is higher than the aforementioned threshold.
- The chart should also contain a static line depicting the threshold value.

Here is what I have so far (see the sketch after this post). I believe I keep getting lost when using an aggregate function such as count(), because adding something to the result using eval just won't work.

    index="my_index" eventtype=* host="$HOST_FROM_DROPDOWN$"
    | lookup my-events eventtype
    | eventstats count by eventtype
    | where alert_threshold > 0 AND count > alert_threshold
    | stats count by eventtype
    | eval Threshold = alert_threshold

What I do understand is that I have to add the "Threshold" variable in the overlay options of the chart. Any help is much appreciated. Thank you
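A hedged sketch of one way to keep the threshold through the aggregation, assuming the my-events lookup returns a field named alert_threshold: the final eval fails because "stats count by eventtype" discards alert_threshold, so carry it in the BY clause instead:

    index="my_index" eventtype=* host="$HOST_FROM_DROPDOWN$"
    | lookup my-events eventtype OUTPUT alert_threshold
    | eventstats count as event_count by eventtype
    | where alert_threshold > 0 AND event_count > alert_threshold
    | stats count by eventtype, alert_threshold
    | rename alert_threshold as Threshold

Threshold then exists as a real result column and can be selected in the chart overlay options.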
Hello, has anyone opted for the Splunk Cloud workload pricing model? I would like to understand the pros and cons of opting for this model. Please share your thoughts. Thanks
I get the following error in splunkd. Can anyone please help?

    ERROR DispatchReaper - Failed to reap $SPLUNK_HOME\var\run\splunk\dispatch\scheduler_1650877200_30023_310983C8-ADCC-4257-9A92-C56D31781CA1 because of Access is denied.
Hi, I have SPL that generates months of data. I want to subtract just the last two columns. The fields will change month to month, so I can't hard-code them. Given the sample below, how can I get lastMonthDiff without hard-coding the field names? Thank you! Chris

    | makeresults
    | eval "2202-01"=1
    | eval "2202-02"=2
    | eval "2202-03"=5
    | eval "2202-04"=4
    | append
        [| makeresults
        | eval "2202-01"=4
        | eval "2202-02"=5
        | eval "2202-03"=7
        | eval "2202-04"=3]
    | append
        [| makeresults
        | eval "2202-01"=5
        | eval "2202-02"=2
        | eval "2202-03"=7
        | eval "2202-04"=9]
    | fields - _time
    | foreach * [eval lastMonthDiff = '2202-03' - '2202-04']
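One hedged approach, assuming foreach visits the matching fields in their column order: track the current and previous column values while iterating, then diff them at the end. The wildcard "2202-*" fits this sample only; real month fields would need a broader pattern such as "*-*".

    | foreach "2202-*"
        [ eval prevMonth = currMonth, currMonth = '<<FIELD>>' ]
    | eval lastMonthDiff = prevMonth - currMonth
    | fields - prevMonth, currMonth

After the loop, currMonth holds the last column and prevMonth the second-to-last, matching the hard-coded '2202-03' - '2202-04' in the sample.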
Hi all, my query has:

    .... | stats latest(time) as recent_event, latest(key) as recent_key, count by field1, field2

and the output columns are ordered like:

    field1  field2  recent_event  recent_key  count

(where count is produced by the "count by" clause). Is it possible to change the order of the columns to:

    recent_event  count  field1  recent_key  field2
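A minimal sketch: the table command (or fields) reorders output columns without changing the results:

    .... | stats latest(time) as recent_event, latest(key) as recent_key, count by field1, field2
    | table recent_event, count, field1, recent_key, field2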
Hi all, I have to configure Splunk Cloud to ingest AWS logs, and it's the first time for me. I saw the Data Manager app, and I think ingestion from AWS should be very easy; what I haven't understood is whether there are any prerequisites in terms of apps. In other words, before enabling a Data Manager AWS input, is it better to install TA_AWS, AWS_App, both, or neither? Also, do you have any hints about index creation? Any additional points of attention? Thank you in advance. Ciao. Giuseppe
Hello All, we are getting a "Failed to load source for MC KPI Indicator visualization" error in the Splunk Monitoring Console. We tried accessing it in another browser, cleared the browser cache, and restarted splunkd, but still some panels in the DMC work and some show the above error. Does anyone have any idea how to solve it? Many greetings, Justyna
Hello everyone! I am currently integrating Splunk into our project, working with a local installation of Splunk Enterprise to test the waters and find my way around Splunk itself. I am using the HttpEventCollectorSender class from the Splunk package. My issue is the following: no matter in which format I send a message with the HEC sender, I always get the following exception:

    Web Exception: Server Reply: {"text":"Error in handling indexed fields","code":15,"invalid-event-number":0}
    Response: StatusCode: 400, ReasonPhrase: 'Bad Request', Version: 1.1, Content: System.Net.Http.HttpConnectionResponseContent, Headers:
    {
        Date: Mon, 02 May 2022 10:39:30 GMT
        X-Content-Type-Options: nosniff
        Vary: Authorization
        Connection: close
        X-Frame-Options: SAMEORIGIN
        Server: Splunkd
        Content-Type: application/json; charset=utf-8
        Content-Length: 78
    }
    HResult: -2146233088

The code I use for sending is almost line by line from the Splunk HEC tutorial (I added some more Send calls at the bottom to try out different formats):

    // Configure the sender against the local HEC endpoint; <token> is the HEC token.
    var middleware = new HttpEventCollectorResendMiddleware(0);
    var ecSender = new HttpEventCollectorSender(
        new Uri("https://splunkserverdefaultcert:8088/"),
        <token>,
        null,
        HttpEventCollectorSender.SendMode.Sequential,
        0, 0, 0,
        middleware.Plugin);
    // Several overloads tried; all of them produce the same 400 response.
    ecSender.Send(Guid.NewGuid().ToString(), "INFO", null, <message>);
    ecSender.Send(Guid.NewGuid().ToString(), "INFO", <message>);
    ecSender.Send(data: <message>);
    ecSender.Send(message: <message>);
    ecSender.Send(Guid.NewGuid().ToString(), "INFO", null, data: new { testProperty = "testing" });
    ecSender.Send(data: new { testProperty = "testing" });
    ecSender.FlushAsync().Start();

No matter how I format the message, I get the error mentioned above. Since the error seems to indicate a formatting issue, I already tried different formats. Looking into the errors that get logged, I can see how the actual message being sent looks, so I can confirm that the following formats do not work:

    {"time":"1651492587,089","event":{"data":"This is an event"}}
    {"time":"1651492587,089","event":{"message":"This is an event"}}
    {"time":"1651494076,162","event":{"id":"00588efd-f403-4cf7-95ce-4ef2a28b0f93","severity":"INFO","data":"This is an event"}}
    {"time":"1651494076,162","event":{"id":"00588efd-f403-4cf7-95ce-4ef2a28b0f93","severity":"INFO","message":"This is an event"}}

However, if I just do it with curl as follows, everything seems to work perfectly fine:

    curl https://splunkserverdefaultcert:8088/services/collector/event/1.0 -k -H "Authorization: Splunk <token>" -d "{\"time\":\"1651492587\",\"event\":{\"data\":\"This_is_an_event\"}}"

Do you know what could be causing this error, and what I am doing wrong?

Edit: I can now say that this also happens with other Splunk servers, not only my local one. curl works, but the HEC sender implementation always throws the error mentioned above. If you have any ideas, I would be really thankful for some input!
Hello Everyone, I'm trying to analyze data from a JBoss server: HTTP request and response dumps. An "event" in the JBoss logs looks like this:

    ==============================================================
    2022-04-29 11:42:54,280 INFO [io.undertow.request.dump] (default task-25)
    ----------------------------REQUEST---------------------------
    URI=/auth/realms/XXXX/protocol/openid-connect/token
    characterEncoding=null
    contentLength=105
    contentType=[application/x-www-form-urlencoded]
    header=Accept=application/json
    header=Front-End-Https=On
    header=Connection=Keep-Alive
    header=X-Forwarded-Proto=https
    header=RequestID=XXXXX-e05b-4a83-8a0b-2e6daf84b039
    header=X-Forwarded-For=xx.46.xx.242
    header=Content-Type=application/x-www-form-urlencoded
    header=Content-Length=105
    header=Host=XXX.YYY.ZZ
    locale=[]
    method=POST
    protocol=HTTP/1.1
    queryString=
    remoteAddr=/xx.46.xx.242:0
    remoteHost=xxx.webhost.xxxx.com
    scheme=https
    host=xxx.yyy.zz
    serverPort=8443
    isSecure=true
    body=
    grant_type=client_credentials
    scope=XYZ
    client_id=0XXXea
    client_secret=3XXXXXXXXXX85a64ba80059e0143ee
    --------------------------RESPONSE--------------------------
    contentLength=1373
    contentType=application/json
    header=Cache-Control=no-store
    header=Set-Cookie=KC_RESTART=; Version=1; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Max-Age=0; Path=/auth/realms/XXXX/; Secure; HttpOnly
    header=X-XSS-Protection=1; mode=block
    header=Pragma=no-cache
    header=X-Frame-Options=ALLOW-FROM http://localhost:4200
    header=Referrer-Policy=no-referrer
    header=Server-Timing=intid;desc=66e921155be4dfbd
    header=Date=Fri, 29 Apr 2022 09:41:39 GMT
    header=Connection=keep-alive
    header=Strict-Transport-Security=max-age=31536000; includeSubDomains
    header=X-Content-Type-Options=nosniff
    header=Content-Type=application/json
    header=Content-Length=1373
    status=200

The problem I have is that this gets split into 6-9 separate events by the time I see it in Search. That makes it really difficult to search for particular requests, for example to filter requests with a specific "scope=" field, or without one. I have no control over the forwarding of the data, so I have to make this work on the search side. My idea so far is to group the events that happened at the same time, because every event belonging to one request has the same timestamp. The closest thing I found is

    | transaction maxspan=1s

which is not accurate enough; I could have multiple requests within one second. Any better idea here? If I overcome this, my next problem is how to search the resulting list. I cannot add the filters before the grouping, like this:

    index=someindex_prod_events sourcetype=openshift_logs NOT "scope=xyz" | transaction maxspan=1s

because the filter is applied first and the events are grouped afterwards; my goal is the other way around. Any help would be appreciated!
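A hedged sketch of one way to group first and filter afterwards, assuming all fragments of a single request share an identical _time down to the millisecond, and that two different requests on the same host never share that exact timestamp (an assumption worth validating against real data):

    index=someindex_prod_events sourcetype=openshift_logs
    | stats list(_raw) as raw by _time, host
    | eval full_event = mvjoin(raw, " ")
    | where NOT match(full_event, "scope=xyz")

Because the filter runs after stats has reassembled each request, it operates on the whole request/response dump rather than on individual fragments.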
Hi All, I have configured inputs to monitor a file path, but no events are visible in Splunk. I checked the internal index and found the below error.
Hi everyone! As an intern for an engineering degree, I have to produce a state of the art around Windows logs and how they are used with Splunk, among others. So here is my question: what do you usually do with Windows logs, which pieces of information do you extract, and for what purpose? Thank you in advance for your answers! Regards, Antoine
We have log files generated on a Linux server. We want to push them into Splunk automatically on a regular time interval. Thanks in advance
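A minimal sketch of the usual approach, assuming a Splunk Universal Forwarder is installed on the Linux server (the path, index, and sourcetype below are placeholders to adapt): a monitor input tails the files continuously, so no interval scheduling is needed.

    # inputs.conf on the Universal Forwarder
    [monitor:///var/log/myapp/]
    index = main
    sourcetype = myapp_logs
    disabled = false

The forwarder picks up new lines as they are written and sends them to the indexers configured in outputs.conf.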
Can anyone help plot a line chart where the x-axis starts from a non-zero value, like the image below? You can see the graph starting from 1994-95; this is what I require. Can anyone please help us? Thank you.
I'm going to the page below and selecting Windows OS; I'm then redirected to the download page, where I get the error:

    There was an error loading this page. Please try again in some minutes.

I've tried different browsers and another computer, but it's still not working. Can anyone let me know how I can download it?
Hi, I'm collecting logs from an S3 path using the Splunk Add-on for Amazon Web Services. I want to extract a field from the S3 path string. I was able to do it with this expression:

    | rex field=source "[.]*\/batch_id=(?<batch_id>[0-9]*)\/[.]*"

How can I do this with field extraction in Splunk Cloud so that the field is extracted automatically at search time? Thanks
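A hedged sketch of an equivalent search-time extraction in props.conf (deployable via a custom app in Splunk Cloud, or created through Settings > Fields > Field extractions in the UI). The sourcetype aws:s3 is an assumption; match the stanza to whatever sourcetype the add-on input actually assigns:

    # props.conf -- search-time extraction against the source field
    [aws:s3]
    EXTRACT-batch_id = \/batch_id=(?<batch_id>\d+)\/ in source

The trailing "in source" tells Splunk to run the regex against the source field rather than _raw, mirroring the rex field=source behavior.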
Query to find activity towards a particular URL, e.g. https://www.microsoft.com/en-us/security
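A hedged sketch, assuming proxy or web logs in which the requested URL is extracted into a url field; the index name and the src_ip/user field names are placeholders to adapt to the actual data:

    index=proxy url="*microsoft.com/en-us/security*"
    | stats count by src_ip, user
    | sort - count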
I keep getting this every time I try to download the 60-day trial. Why? I have made an account, verified my email, and tried to download on several browsers, all with the same message. Please help, as I need this for my IT course and they only direct us to the site. I have tried calling the phone numbers, only to be thrown in a loop in the options, and even when reaching sales they just say no one is available. For such an apparently needed application, there is no information or help regarding this issue. I have also cleared the cache and tried different PCs, all with the same result. I hope someone knows what causes this, as even the cyber data lecturer can't answer it.
Hi All, I need to correlate data from two different indexes that share a common field name.

    index=idx1 (general user info)
        Field name: sys_created_by
        Value: <email id of the user>
        Other fields of interest in idx1: login_time

    index=idx2 (URLs accessed by the user)
        Field name: sys_created_by
        Value: <email id of the user>
        The URL information is stored in a field called "url" in idx2.

The use case is to take the sys_created_by field from idx1 and look up all URLs in idx2 accessed by that sys_created_by. I cannot rely on the sys_created_by field from idx2 alone, as it doesn't carry the other user attributes that are in idx1, such as login_time; hence I need to correlate data across the two indexes. Do I need something like the following to merge sys_created_by from both indexes?

    | eval common_field = coalesce(sys_created_by, sys_created_by)

I tried something like:

    (index=idx1 sys_created_by!="") OR (index=idx2 sys_created_by!="" url!="")
    | stats values(url) values(login_time) BY sys_created_by

but this doesn't show the results I expect. Is there a way to reference my common field in the BY clause, as shown below, to tell Splunk which index it needs to refer to?

    | stats values(url), values(login_time) BY (idx2.sys_created_by)
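A hedged sketch: stats BY takes plain field names, not index-qualified references, and since the field is already named sys_created_by in both indexes, no coalesce is needed. One possible gap is users who appear in only one index; filtering to users seen in both (and assuming the sys_created_by values are formatted identically in idx1 and idx2):

    (index=idx1 sys_created_by=*) OR (index=idx2 sys_created_by=* url=*)
    | stats values(url) as urls, values(login_time) as login_times, dc(index) as index_count by sys_created_by
    | where index_count = 2

The dc(index) count keeps only sys_created_by values that contributed events from both idx1 and idx2, so every row carries both urls and login_times.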