All Posts

I am trying to ingest Proofpoint TAP logs into our Splunk environment and noticed that our Proofpoint TAP app is showing the dashboards for the Cisco FMC app for some reason. I thought maybe I could resolve it by deleting the app and reinstalling it, but even after doing that it is still showing the FMC app. Has anyone seen this before? I tried looking for other posts with this issue, but my search is coming up short.
Hi @msarkaus, after a stats command you have only the fields named in the stats command, so you no longer have the _time field. In addition, if you use the list option in the stats command you probably have too many values, so try values instead of list. Try something like this:

index blah blah
| eval msgTxt=substr(msgTxt, 1, 141)
| stats values(_time) as DateTime values(msgTxt) as Message values(polNbr) as QuoteId BY tranId
| eval DateTime=strftime(DateTime, "%m-%d-%Y %I:%M:%S %p")
| streamstats count as log by tranId
| eval tranId=if(log=1,tranId,"")
| fields - log

Ciao.
Giuseppe
Hello, I'm attempting to display a group of logs by tranId. We log multiple user actions under a single tranId, and I'm attempting to group all of the logs for a single tranId in my dashboard. I think I figured out how I want to display the logs, but I can't get the datetime format to display correctly.

index blah blah
| eval msgTxt=substr(msgTxt, 1, 141)
| stats list(_time) as DateTime list(msgTxt) as Message list(polNbr) as QuoteId by tranId
| eval time=strftime(_time," %m-%d-%Y %I:%M:%S %p")
| streamstats count as log by tranId
| eval tranId=if(log=1,tranId,"")
| fields - log

Please help with displaying the date and time format. Thanks
Go to your cluster master/manager and deploy the app with props.conf from master-apps. For example:

[my_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = (?:,)([\r\n]+)
TIME_FORMAT = %Y%m%d%H%M%S
TRUNCATE = 0

You can edit props.conf in $SPLUNK_HOME/etc/master-apps/_cluster/local/props.conf on the manager and push the cluster bundle with the command 'splunk apply cluster-bundle'. The peers will restart, and props.conf, in $SPLUNK_HOME/etc/slave-apps/_cluster/local/props.conf, will be layered when splunkd starts. https://conf.splunk.com/files/2017/slides/pushing-configuration-bundles-in-an-indexer-cluster.pdf

Then go to your search head, place the props.conf there for the search-time field extractions, and restart the search head:

[my_json]
KV_MODE = json

Be careful if you are updating all of this in production: depending on the changes, a restart of the indexers will be required, so please be cautious.

If you need more hands-on support, Splunk OnDemand Services can guide you through this process and shoulder-surf your requirements.
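For reference, a minimal command sequence on the cluster manager (a sketch assuming a default $SPLUNK_HOME; validating first shows whether applying the bundle will trigger a peer restart):

# check the bundle and whether applying it will restart the peers
$SPLUNK_HOME/bin/splunk validate cluster-bundle --check-restart
# push the bundle to the peers
$SPLUNK_HOME/bin/splunk apply cluster-bundle
# watch the push progress
$SPLUNK_HOME/bin/splunk show cluster-bundle-status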
| eval request='msg.service'." ".method." ".requestURI." ".responseCode | table request count
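For context, a sketch of the full pipeline this slots into, based on the search from the question below (the requestURI capture in the rex is reconstructed, so treat it as an assumption):

index="someindex" cf_space_name="somespace" msg.severity="*"
| rex field=msg.message ".*METHOD:(?<method>.*),\sREQUEST_URI:(?<requestURI>.*),\sRESPONSE_CODE:(?<responseCode>.*),\sRESPONSE_TIME:(?<responseTime>.*)\sms"
| stats count by msg.service, method, requestURI, responseCode
| eval request='msg.service'." ".method." ".requestURI." ".responseCode
| fields request count
| sort -count

With a single string field (request) and a single numeric field (count), the bar chart shows one label per bar on the x-axis and the count on the y-axis.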
My Splunk search is as follows:

index="someindex" cf_space_name="somespace" msg.severity="*"
| rex field=msg.message ".*METHOD:(?<method>.*),\sREQUEST_URI:(?<requestURI>.*),\sRESPONSE_CODE:(?<responseCode>.*),\sRESPONSE_TIME:(?<responseTime>.*)\sms"
| stats count by msg.service, method, requestURI, responseCode
| sort -count

Result table:

msg.service   method   requestURI      responseCode   count
serviceA      GET      /v1/service/a   200            327
serviceB      POST     /v1/service/b   200            164
serviceA      POST     /v1/service/a   200            91

Under Visualization, I am trying to change this to a bar chart. I am getting all four fields on the x-axis: msg.service is mapped with count, and responseCode is mapped with responseCode. The other two fields are not visible since they are non-numeric. If I remove fields using the following, I get the proper chart (just msg.service mapped with count):

my query | fields - responseCode, method, requestURI

But I need something like this on the x and y axes:

x axis                           y axis
serviceA GET /v1/service/a 200   327
serviceB POST /v1/service/b 200  164
serviceA POST /v1/service/a 200  91

How do I achieve this?
I don't think you are doing anything wrong; it looks like a bug to me.
Hello, I want to initialize a token with the week number of today. According to the documentation, https://docs.splunk.com/Documentation/SCS/current/Search/Timevariables, the variable to use to get the week of the year (1 to 52) is %V. This works in any search query, but it is not working when used in the <init> tag of a dashboard. This is my <init>:

<form version="1.1" theme="dark">
  <init>
    <eval token="todayYear">strftime(now(), "%Y")</eval>
    <eval token="todayMonth">strftime(now(), "%m")</eval>
    <eval token="todayWeek">strftime(now(), "%V")</eval>
    <eval token="yearToken">strftime(now(), "%Y")</eval>
    <eval token="monthToken">strftime(now(), "%m")</eval>
  </init>
...

All these tokens are initialized correctly except todayWeek, which refers to the %V variable and gets no value. What am I doing wrong?
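Since %V does evaluate correctly inside a search, one possible workaround (a sketch, not a confirmed fix for the <init> behavior) is to set the token from a global search instead:

<search>
  <query>| makeresults | eval todayWeek=strftime(now(), "%V")</query>
  <done>
    <set token="todayWeek">$result.todayWeek$</set>
  </done>
</search>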
appendpipe processes all the events in the events pipeline. The second appendpipe has two events to process: the first one, which has no value for total1, so null+1=null (this is the third event), and the second, which has a value of 5, so 5+1=6 (this is the fourth event).
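A minimal runnable sketch of the same mechanics (the fields and subsearches are illustrative assumptions, not the exact search from the question):

| makeresults
| eval total1=5
| appendpipe [ stats count ]
| appendpipe [ eval total2=total1+1 ]

The first appendpipe appends one extra event (the stats result), which has no total1. The second appendpipe then runs over both events and appends its results: the event with total1=5 yields total2=6 (5+1), and the stats event yields total2=null (null+1), the same null-propagation behavior described above.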
JSON is a structure that does not require any specific order of keys. If your downstream application has this requirement, it is not compliant with the standard. You don't have to make any change; ask your downstream developer to make the change instead.
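For illustration (field names made up for the example), these two documents encode the same JSON object, so a standard-compliant consumer must treat them identically:

{"user": "alice", "id": 7}
{"id": 7, "user": "alice"}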
Hi, I'm trying to learn how appendpipe works. To do that, I've tried this dummy search, and I don't understand why appendpipe returns the highlighted row. (Screenshot of the dummy search and its results.)
If your system doesn't accept text day-of-week notation such as Mon or Tue, you can use numbers. In most systems, the week starts from Sunday as 0.

0 0 * * 1/2 run my alert

Here is the relevant part of man 5 crontab:

Step values can be used in conjunction with ranges. Following a range with "/<number>" specifies skips of the number's value through the range. For example, "0-23/2" can be used in the hours field to specify command execution every other hour (the alternative in the V7 standard is "0,2,4,6,8,10,12,14,16,18,20,22"). Steps are also permitted after an asterisk, so if you want to say "every two hours", just use "*/2".

(Of course, my manpage also states: day of week 0-7 (0 or 7 is Sun, or use names).)
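As a concrete sketch of step syntax in the day-of-week field (assuming a Vixie-style cron; verify against your scheduler's manpage):

# day-of-week 1-7/2 expands to 1,3,5,7: Mon, Wed, Fri, Sun
0 0 * * 1-7/2   run my alert
# the same schedule written as an explicit list
0 0 * * 1,3,5,7 run my alert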
So if I am running 9.3.1 and Tenable is still flagging this, what was the solution? Or is there a fix so this does not show up in the scan?
The SmartStore cache has no effect on the freezing of data. If the cache becomes full, data will be evicted from the cache to make room for new data.
I have the same problem with Amazon Linux 2023 (kernel 6.1). The reason is that Splunk Enterprise does not support Linux kernel 6; I noticed Splunk has no problem with Amazon Linux 2023 on kernel 5. In short, the cause is that Splunk does not support the OS kernel you are using.
Restarting did get rid of the message. Thanks for the reminder that a restart is necessary after every change.
Greetings, I found some useful savedsearches under SA-AccessProtection / DA-ESS-AccessProtection, which I am interested in using. However, I'd like to understand these use cases before making them live. Are these apps and their content documented somewhere? So far, I have not had any luck. Thanks!
I have a dashboard that a specific team uses. Today, they asked why one of the panels was broken. Looking into it, we were receiving this error from the search:

Error in 'fit' command: Error while fitting "StateSpaceForecast" model: timestamps not continuous: at least 33 missing rows, the earliest between "2024-01-20 07:00:00" and "2024-01-20 09:00:00", the latest between "2024-10-02 06:00:00" and "2024-10-02 06:00:01"

That seemed pretty straightforward: I thought we might be missing some timestamp values. This is the query we are running:

| inputlookup gslb_query_last505h.csv
| fit StateSpaceForecast "numRequests" holdback=24 forecast_k=48 conf_interval=90 output_metadata=true period=120

Looking into the CSV file itself, I went to look for missing values under the numRequests column. We have values for each hour going back almost a year. (Screenshot of the timestamps mentioned in the error.) Looking at that screenshot now, there is an hour missing: the timestamp for 08:00. That may be the cause. How would I go about efficiently finding the 33 missing values? Each missing value would be between two hours. Will I have to go through and find skipped hours among 8k results in the CSV file?

Thanks for any help.
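A sketch of one way to find the gaps without reading the CSV by hand, assuming the lookup has an epoch-seconds _time column (adjust the field name and parsing to match your file):

| inputlookup gslb_query_last505h.csv
| sort 0 _time
| streamstats current=f last(_time) as prev_time
| eval gap_hours=(_time - prev_time) / 3600
| where gap_hours > 1
| eval gap_start=strftime(prev_time, "%F %H:%M"), gap_end=strftime(_time, "%F %H:%M")
| table gap_start gap_end gap_hours

Each row is a place where consecutive timestamps are more than an hour apart, so the 33 missing hours should surface as a handful of rows instead of 8k results to scan.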
It captured 265 lines with a total of 26,591 characters.  The original sp text size is 38,841. 
You may find this link helpful: https://docs.splunk.com/Documentation/Splunk/9.3.1/Admin/ChecktheintegrityofyourSplunksoftwarefiles