I've been asked to generate an uptime report for Splunk.  I don't see anything obvious in the monitoring console, so I thought I'd try to see if I could build a simple dashboard.  Does the monitoring console log things like 
@sainag_splunk Oh okay! Where does adding in the time range come in? Or how is it linked to the panel's search?
Yes, you can also use the | loadjob command directly in the search in Dashboard Studio if you're trying to load saved searches. I can take a look at the issue when I'm on my computer; please share your JSON code.
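As a hedged illustration (the owner, app, and saved-search name below are placeholders, not from this thread), a Dashboard Studio panel search that reuses a saved search's results could look like:

```
| loadjob savedsearch="admin:search:my_saved_search"
```

Note that | loadjob loads the results of the saved search's most recent run, so the panel's time range picker does not re-scope those results.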
@sainag_splunk Correct me if I'm wrong, but that doc is for Classic Dashboards, which use XML code; we are using Dashboard Studio, which works with JSON code.
Hi @PickleRick, I tested your suggestion and it worked. Thank you for your help.

1) I added one more case, where the IP has an empty name. I added a condition in the where clause (dc=0) and it worked. I am afraid isnull(name) would not catch it, because sometimes the field contains " " (an empty string). Please let me know if this is doable.

2) Is it possible to do this without using eventstats? I have already used eventstats in the search, but for a different field. Will that cause any delays or issues? Have you ever used multiple eventstats in one search?

Thank you so much for your help.

ip       name    location
1.1.1.1  name0   location-1
1.1.1.1  name1   location-1
1.1.1.2  name2   location-2
1.1.1.2  name0   location-20
1.1.1.3  name0   location-3
1.1.1.3  name3   location-3
1.1.1.4  name4   location-4
1.1.1.4  name4b  location-4
1.1.1.5  name0   location-0
1.1.1.6  name0   location-0
1.1.1.7          location-7

| makeresults format=csv data="ip, name, location
1.1.1.1, name0, location-1
1.1.1.1, name1, location-1
1.1.1.2, name2, location-2
1.1.1.2, name0, location-20
1.1.1.3, name0, location-3
1.1.1.3, name3, location-3
1.1.1.4, name4, location-4
1.1.1.4, name4b, location-4
1.1.1.5, name0, location-0
1.1.1.6, name0, location-0
1.1.1.7,,location-7"
| eventstats dc(name) AS dc BY ip
| where name!="name0" OR dc=0 OR (name=="name0" AND dc=1)
Yes. Define an exception in Nessus.
Do you have a heavy forwarder in your environment to install this add-on? This is a modular input that should run on a heavy forwarder, so please disable it on the search head and install it on one of your heavy forwarders.
Hello, We are using Splunk Enterprise version 9.1.2. Yes that is the correct app we are trying to use and I verified that the visibility is enabled.
Hello, this looks like an issue with app/TA UI visibility. I have seen issues like this whenever a TA has missing config. Are you trying to use https://splunkbase.splunk.com/app/3681? Is this Splunk Enterprise or Cloud? What version? Just to make sure, can you please go to Manage Apps > Your app > Edit Properties > Visible? Thanks
I am trying to ingest Proofpoint TAP logs into our Splunk environment and noticed that our Proofpoint TAP app is showing the dashboards for the Cisco FMC app for some reason. I thought I could resolve it by deleting the app and reinstalling it, but even after doing that it still shows the FMC dashboards. Has anyone seen this before? I tried looking for other posts with this issue, but my search came up short.
Hi @msarkaus,
after a stats command you have only the fields listed in the stats command, so you no longer have the _time field. In addition, if you use the list option in the stats command you probably get too many values, so try values instead of list. Try something like this:

index blah blah
| eval msgTxt=substr(msgTxt, 1, 141)
| stats values(_time) as DateTime values(msgTxt) as Message values(polNbr) as QuoteId BY tranId
| eval DateTime=strftime(DateTime, "%m-%d-%Y %I:%M:%S %p")
| streamstats count as log by tranId
| eval tranId=if(log=1,tranId,"")
| fields - log

Ciao.
Giuseppe
Hello, I'm attempting to display a group of logs by tranId. We log multiple user actions under a single tranId, and I'm attempting to group all of the logs for a single tranId in my dashboard. I think I figured out how I want to display the logs, but I can't get the datetime to display in the correct format.

index blah blah
| eval msgTxt=substr(msgTxt, 1, 141)
| stats list(_time) as DateTime list(msgTxt) as Message list(polNbr) as QuoteId by tranId
| eval time=strftime(_time," %m-%d-%Y %I:%M:%S %p")
| streamstats count as log by tranId
| eval tranId=if(log=1,tranId,"")
| fields - log

Please help with displaying the date and time format. Thanks
Go to your cluster master/manager and deploy the app with props.conf from master-apps. For example:

[my_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = (?:,)([\r\n]+)
TIME_FORMAT = %Y%m%d%H%M%S
TRUNCATE = 0

You can edit props.conf in $SPLUNK_HOME/etc/master-apps/_cluster/local/props.conf on the master and push the cluster bundle with the command 'splunk apply cluster-bundle'. The peers will restart, and that props.conf (landing in $SPLUNK_HOME/etc/slave-apps/_cluster/local/props.conf on each peer) will be layered in when splunkd starts. https://conf.splunk.com/files/2017/slides/pushing-configuration-bundles-in-an-indexer-cluster.pdf

Then go to your search head, place the following props.conf, and restart the search head for the field extractions:

[my_json]
KV_MODE = json

Be careful if you are making these changes in production; depending on the changes, a restart of the indexers may be required, so please be cautious. If you need more hands-on support, Splunk OnDemand Services can guide you through this process and shoulder-surf your requirements.
| eval request='msg.service'." ".method." ".requestURI." ".responseCode | table request Count
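Putting that eval together with the search from the question below, a hedged end-to-end sketch (index and field names are taken from the question, not verified) would be:

```
index="someindex" cf_space_name="somespace" msg.severity="*"
| rex field=msg.message ".*METHOD:(?<method>.*),\sREQUEST_URI:(?<requestURI>.*),\sRESPONSE_CODE:(?<responseCode>.*),\sRESPONSE_TIME:(?<responseTime>.*)\sms"
| stats count by msg.service, method, requestURI, responseCode
| eval request='msg.service'." ".method." ".requestURI." ".responseCode
| table request count
```

The single concatenated request field becomes the x-axis category, and count becomes the y-axis value, so the non-numeric fields no longer compete for the x-axis.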
My Splunk search is as follows:

index="someindex" cf_space_name="somespace" msg.severity="*"
| rex field=msg.message ".*METHOD:(?<method>.*),\sREQUEST_URI:(?<requestURI>.*),\sRESPONSE_CODE:(?<responseCode>.*),\sRESPONSE_TIME:(?<responseTime>.*)\sms"
| stats count by msg.service, method, requestURI, responseCode
| sort -count

Result table:

msg.service  method  requestURI     responseCode  Count
serviceA     GET     /v1/service/a  200           327
serviceB     POST    /v1/service/b  200           164
serviceA     POST    /v1/service/a  200           91

Under Visualization, I am trying to render this as a bar chart. I am getting all four fields on the x-axis: msg.service is mapped against count, and responseCode is mapped against responseCode. The other two fields are not visible since they are non-numeric. If I remove fields with the following, I get a proper chart (just msg.service mapped against count):

my query | fields - responseCode, method, requestURI

But I need something like this on the x and y axes:

x axis                           y axis
serviceA GET /v1/service/a 200   327
serviceB POST /v1/service/b 200  164
serviceA POST /v1/service/a 200  91

How can I achieve this?
I don't think you are doing anything wrong, it looks like a bug to me.
Hello, I want to initialize a token with the week number of today's date. According to the documentation, https://docs.splunk.com/Documentation/SCS/current/Search/Timevariables, the variable to get the week of the year (1 to 52) is %V. This works in any search query, but it does not work when used in the <init> tag of a dashboard. This is my <init>:

<form version="1.1" theme="dark">
  <init>
    <eval token="todayYear">strftime(now(), "%Y")</eval>
    <eval token="todayMonth">strftime(now(), "%m")</eval>
    <eval token="todayWeek">strftime(now(), "%V")</eval>
    <eval token="yearToken">strftime(now(), "%Y")</eval>
    <eval token="monthToken">strftime(now(), "%m")</eval>
  </init>
...

All these tokens are initialized correctly except todayWeek, which uses the %V variable and gets no value. What am I doing wrong?
appendpipe processes all the events already in the pipeline. The second appendpipe has two events to process: the first has no value for total1, so null+1=null (this becomes the third event), and the second has a value of 5, so 5+1=6 (this becomes the fourth event).
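The behavior can be seen with a minimal dummy search (my own illustration, not the original poster's search):

```
| makeresults count=3
| streamstats count AS n
| appendpipe [ stats sum(n) AS n ]
```

The three original events (n=1, 2, 3) pass through unchanged, and appendpipe appends one extra event holding the subpipeline's result (n=6). A second appendpipe placed after this one would then see all four events as its input, which is why results from an earlier appendpipe show up in a later one.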
JSON is a structure that does not require any specific order of keys. If your downstream application has this requirement, it is noncompliant with the standard. You don't have to make any change; ask your downstream developer to make the change.
Hi, I'm trying to learn how appendpipe works. To do that I've tried this dummy search, and I don't understand why appendpipe returns the highlighted row.