All Topics



Hi, I am new to Splunk. I need to find the number of days between the index-time date and the date in an existing field. I first converted my field to epoch and computed the difference between printedA_epoch and _indextime (which is already in epoch form by default), but the result comes back blank. I also need to set a variable to 1 if the difference is greater than 0. The printedtimestrampA data looks like "2020-06-20T01:23:23.693-0700". My search so far:

| eval printedA_epoch=strptime(printedtimestrampA,"%Y-%m-%dT%H:%M:%S.%Q")
| eval indextime=_indextime
| eval fdata=round(((_indextime-printedA_epoch)/86400),0)
| eval daysA=if(fdata>0,1,0)
| table _indextime, printedA_epoch, fdata
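A hedged sketch of one possible cause: strptime() returns null when the format string does not fully match the value, which would leave printedA_epoch (and therefore fdata) blank. Adding %z to consume the trailing -0700 offset may help; field names are taken from the post above.

```
| eval printedA_epoch=strptime(printedtimestrampA, "%Y-%m-%dT%H:%M:%S.%Q%z")
| eval fdata=round((_indextime - printedA_epoch) / 86400, 0)
| eval daysA=if(fdata > 0, 1, 0)
| table _indextime, printedA_epoch, fdata, daysA
```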
Please, I need a search that can alert when a logger is idle, or when a forwarder isn't feeding any information.
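A minimal sketch of one common approach, assuming the forwarders send their own logs to _internal: the metadata command reports the most recent event time per host, so hosts that have gone quiet for longer than a chosen threshold (60 minutes here, an arbitrary assumption) can be flagged and saved as an alert.

```
| metadata type=hosts index=_internal
| eval minutes_silent=round((now() - recentTime) / 60, 0)
| where minutes_silent > 60
| table host, minutes_silent
```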
Greetings, I am new to Splunk and I have an assignment where I need to extract data based on ticket number and timestamp for "Add Task" and "Resolve". A ticket contains comments from inception to completion. Here is an example of my code:

index=sperf_default source=prod.system.btds.ticket.updated.preproc (EB FIX VERIFY/DENY) activity_type="ADD TASK"
| join ticket_number type=inner
    [ search index=sperf_default source=prod.system.btds.ticket.updated.preproc activity_type="resolve" ]

Thank you.
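As a hedged alternative to join (which is often avoided in SPL because of subsearch row limits and cost), a single search over both activity types grouped with stats may achieve the same pairing; the output field names here are illustrative.

```
index=sperf_default source=prod.system.btds.ticket.updated.preproc
    (activity_type="ADD TASK" OR activity_type="resolve")
| stats earliest(eval(if(activity_type=="ADD TASK", _time, null()))) as add_task_time
        earliest(eval(if(activity_type=="resolve", _time, null()))) as resolve_time
        by ticket_number
```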
We are using the Slack App for Splunk add-on to capture login and message data. Slack:Logins are coming in fine, but Slack:Messages are having a problem and no data is seen in the index. Checking the internal logs (index=_internal source=*splunkd.log slack *error*), we see the following errors for slack_messages.py:

ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-slack/bin/slack_messages.py" ERROR: local variable 'latest_message' referenced before assignment
ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-slack/bin/slack_messages.py" ERROR: 'channels'

Can anyone help with this issue?
On startup the Docker engine throws:

docker: Error response from daemon: failed to initialize logging driver: strconv.ParseBool: parsing "": invalid syntax.

Options being passed:

--log-driver=splunk --log-opt splunk-token='xxxx' --log-opt splunk-url=https:/xxx:8088 --log-opt splunk-insecureskipverify

Docker info:

Client:
 Debug Mode: false

Server:
 Containers: 2
  Running: 0
  Paused: 0
  Stopped: 2
 Images: 2
 Server Version: 19.03.8
 Storage Driver: overlay2
  Backing Filesystem: <unknown>
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 3.10.0-1062.9.1.el7.x86_64
 Operating System: CentOS Linux 7 (Core)
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 3.701GiB
 Name: xxxx
 ID: BSMB:4UAX:KKOR:3WJY:OJMN:2PVM:FR6V:LMSX:RBWJ:TEKE:KZQ6:XOD5
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
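A hedged guess at the cause: strconv.ParseBool: parsing "" suggests splunk-insecureskipverify was passed without a boolean value, and the URL above also has a single slash after https:. A corrected invocation might look like the following (token, URL, and image name are placeholders):

```
docker run \
  --log-driver=splunk \
  --log-opt splunk-token='xxxx' \
  --log-opt splunk-url='https://xxx:8088' \
  --log-opt splunk-insecureskipverify=true \
  <image>
```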
As soon as I open the dashboard, a panel with a single value (the count) should be displayed, and on clicking the single value a drilldown should be shown/hidden with more details, such as type of data, email id, etc.
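One way to sketch this in Simple XML is a token set on click plus a depends attribute on the details panel; the searches, token name, and field names below are assumptions.

```
<panel>
  <single>
    <search><query>index=main | stats count</query></search>
    <drilldown>
      <set token="show_details">true</set>
    </drilldown>
  </single>
</panel>
<panel depends="$show_details$">
  <table>
    <search><query>index=main | table data_type, email_id</query></search>
  </table>
</panel>
```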
Hello, we have a dashboard where we toggle 7 days of data between single-series and multi-series. Is there a way to name the Y-axis in the multi-series view to the date? Stay safe and healthy, you and yours. Thanks and God bless, Genesius
Hi, I cannot install ES 6.0 on Splunk 8.0.4.1; it throws an error during the post-install step. Splunk is a fresh install with no special config and no add-ons; it is all default, except that I changed limits.conf and web.conf for the prerequisites. When it reaches the "Installing new add-ons" step it fails. Why?
Last week we upgraded our Splunk cluster from version 7.3.5 to 7.3.6. Since then, triggered alerts are no longer able to send mail. The _internal index shows an event stating:

ERROR sendemail:461 - 'rootCAPath' while sending mail to: xxx@xx

From other posts it seems to be required to add the list_settings capability to our user roles. However, prior to the upgrade we had no problems with alert mails without this capability, and the release notes for 7.3.6 don't mention any fix or change in this regard. Since the documentation is not quite clear about the impact of adding this capability to a user role (what additional possibilities become available to users with it), and it didn't seem to be required up until 7.3.5, we would like to be sure this capability won't harm our setup.
Hello, I'm installing a new Splunk instance and need to connect it to our master license server. I used to do this from the web admin page, and learned that it is also possible from the CLI, like:

splunk edit licenser-localslave -master_uri 'https://master:port'

Now the question: can I do this before starting Splunk for the first time? The aim is to fully automate the installation, so no manual intervention would be required. What I also have in mind is keeping this setting in server.conf, which I would feed to Splunk after the (rpm) installation and before starting it, but I feel that using the CLI is the more proper way to do this. Thanks! Andrei
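As a sketch of the server.conf approach mentioned in the post above, assuming the [license] stanza is honored on first start (which matches how the CLI command persists the setting), the fragment would be:

```
[license]
master_uri = https://master:port
```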
Hi all, we have 3 search heads in a cluster; search head 1 is the captain. Recently we upgraded from 7.2.3 to 8.0.3. After the upgrade the search head captain's status is red and it shows a "searches delayed" error. None of the users have complained about anything, but we are not sure why the captain is red and reporting delayed searches. I am sure something is wrong; can someone please help with this? We have tried restarting and the resync option.
I am completely new to Splunk. I understand the basics but am lost on where to start with designing for and supporting the following scenario for Splunk (or any SIEM). I didn't see a community location for this type of question, so feel free to direct me to the "Total Nube" section. We run a multi-tenant cloud application, and our customers who use Splunk want us to "log to Splunk". Looking through the "Getting Data In" sections, it is unclear to me how we would support Splunk. In our software we allow our tenant admins to perform configurations themselves. So my basic question is: as the developer of a cloud-based app, how do we provide support for Splunk? Do we "push" event info to a Splunk server, storing the endpoint information for each tenant separately? Do we create a REST endpoint that each Splunk instance can poll on a specific frequency? Bear in mind that we will have tens of customers configuring their tenants to work with their own servers. All the info I have found is geared toward configuring Splunk for my own team's use, not this multi-tenant scenario. Thanks in advance.
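One common pattern, sketched here with placeholder values, is the push model via Splunk's HTTP Event Collector (HEC): each tenant admin supplies their Splunk host and a HEC token, and the app posts JSON events to that endpoint over HTTPS.

```
curl -k "https://<customer-splunk-host>:8088/services/collector/event" \
  -H "Authorization: Splunk <customer-hec-token>" \
  -d '{"event": {"action": "login", "tenant": "<tenant-id>"}, "sourcetype": "myapp:event"}'
```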
Hi, I am using a TCP input in Splunk to receive WSUS data, gathered and pushed to Splunk by a PowerShell script. My question is whether it is possible to use the same input and override the sourcetype based on a field value in the received data. I have a field called "datasource" in my data.
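Yes, index-time sourcetype overriding is possible via props.conf and transforms.conf on the parsing tier; a hedged sketch, where the stanza names, regex, and datasource value are assumptions to adapt:

```
# props.conf
[your_tcp_sourcetype]
TRANSFORMS-set_sourcetype = set_st_by_datasource

# transforms.conf
[set_st_by_datasource]
REGEX = datasource=wsus
FORMAT = sourcetype::wsus:data
DEST_KEY = MetaData:Sourcetype
```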
Hello, I have recently installed DLTK 2.3.0 on Splunk version 7.3.2. MLTK 4.1.0 is already installed, and the OS (CentOS) is running Docker 17.05. Everything is running on the same box. The app was installed correctly, but in the Docker setup nothing happens when I click the "Save" button. Is there something I am missing? I thought it might be permission related, so I added the Splunk user to the docker group, but it didn't make any difference. Any help would be appreciated! Thanks in advance and best regards, Andrew
Hi, I have a SQL job that exports a .csv table to our file server, with one column of user names in the file. The job runs once a day in the morning and writes a new file every day with the same name. Since I uploaded the file once, I can't see the changes from the new files on subsequent days. Is there any option for me to monitor this file as a lookup and run searches against the most recent data? Thank you, Yossi.
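A hedged sketch of one approach: monitor the exported file continuously with a [monitor] input (a one-time upload only indexes the file once, which matches the symptom above), then refresh a lookup from the freshest events with a scheduled search. Paths, index, sourcetype, and field names are placeholders.

```
# inputs.conf
[monitor:///path/to/export/users.csv]
index = main
sourcetype = csv

# scheduled search (e.g. daily) to refresh the lookup:
index=main sourcetype=csv source="*users.csv" earliest=-1d
| table user_name
| outputlookup users_latest.csv
```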
Hello, hope everyone is keeping well. I noticed that the Incident Investigation Feed does not include the URL; how can I add that field to this panel? Thank you for your anticipated assistance. Regards, GrClEnt
Hi, when we used to run the query host=spd1agd01 we got events up to 29/08/2018, but when we run the same query now we get "no results found". We checked host spd1agd01 and found that the Splunk forwarder was not installed; we installed it, but we are still not getting results. We also checked the following path: C:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\data\dump\scm_system.tmp but couldn't find anything. Please help us fix this issue. @Anonymous
Hi, I have a table like the one below, where multiple entries for the same ticket number appear because they are taken from the logs received from the ticketing system.

Incidents  Status    Resolved Date  Closed Date
INC001     Assigned
INC001     Assigned
INC001     Resolved  1/5/2020       2/5/2020
INC001     Closed    1/5/2020       2/5/2020
INC002     Assigned
INC002     Resolved  8/5/2020
INC002     Resolved  8/5/2020
INC002     Closed    8/5/2020       10/5/2020
INC003     Assigned
INC003     Pending
INC004     Assigned
INC004     Assigned
INC004     Assigned
INC004     Resolved  15/05/2020
INC004     Closed    15/05/2020     22/05/2020
INC004     Closed    15/05/2020     22/05/2020

If a ticket is actually closed, I want to fill the Closed Date column of every row of that incident with the actual closed date:

Incidents  Status    Resolved Date  Closed Date
INC001     Assigned                 2/5/2020
INC001     Assigned                 2/5/2020
INC001     Resolved  1/5/2020       2/5/2020
INC001     Closed    1/5/2020       2/5/2020
INC002     Assigned                 10/5/2020
INC002     Resolved  8/5/2020       10/5/2020
INC002     Resolved  8/5/2020       10/5/2020
INC002     Closed    8/5/2020       10/5/2020
INC003     Assigned
INC003     Pending
INC004     Assigned                 22/05/2020
INC004     Assigned                 22/05/2020
INC004     Assigned                 22/05/2020
INC004     Resolved  15/05/2020     22/05/2020
INC004     Closed    15/05/2020     22/05/2020
INC004     Closed    15/05/2020     22/05/2020

Can someone please help me?
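A hedged sketch using eventstats to copy each incident's closed date onto all of its rows, assuming each incident has at most one distinct closed date (as in the sample above):

```
| eventstats values("Closed Date") as closed_all by Incidents
| eval 'Closed Date'=coalesce('Closed Date', mvindex(closed_all, 0))
| fields - closed_all
```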
Well, I want to create an alert that fires whenever there is a spike in errors. Currently we compare, say, the past 30m count with the same time and date two weeks back and against the 2-week average, but I want a near-real-time alert, since the current approach can produce false positives. Some of my errors are trending, some appear only at times of issues, and some are more frequent during peak business hours and less frequent off hours; I want to capture the real spikes while avoiding triggers when we move from non-business to business hours. I was hoping I could use the predict command for this, but I am not clear on all the algorithms or whether it is the right tool here.

index=rxc sourcetype="rxcapp" (level=ERROR) earliest=-30m@m latest=@m
| rex "Id:\s*(?<Id>\d+),"
| search [| inputlookup abc.csv | rename id as Id | fields Id]
| lookup abc.csv id As Id OUTPUT site
| bucket _time span=5m
| stats count by _time error_msg site
| predict lower95=lower upper95=upper algorithm=LLP5 count as predict
| where count>'upper(predict)'
| stats latest(count) by error_msg site

Will this be helpful, or is this wrong? Can predict be used this way after the stats command? Any other suggestions on the approach are welcome.
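As a hedged alternative to predict, a rolling baseline with streamstats can flag counts that exceed the recent mean by a few standard deviations; the 24-hour lookback, 48-bucket window, and 3-sigma threshold below are arbitrary assumptions to tune.

```
index=rxc sourcetype="rxcapp" level=ERROR earliest=-24h@m latest=@m
| bucket _time span=5m
| stats count by _time
| streamstats window=48 current=f avg(count) as avg_count stdev(count) as sd_count
| eval upper=avg_count + 3 * sd_count
| where count > upper
```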
Hi, I am currently attempting to split the date and time from one field into two or more fields. I have read some of the questions and answers here, but to no avail. I am working with Starbucks.csv, which shows the Date, Volume and Closing stock price of Starbucks. The Date format is YYYY-MM-DD. My intention is to split the Date into Year, Month and Day fields respectively. I have seen some of the community answers, and many proposed a simple method such as |eval YearNo=(Date, "%Y) for the Year field. However, when I tried it, the search simply did not return any new field. Below is a snippet of the attempt; I put Date and YearNo in the same table to show how YearNo was not extracted. My next thought was that maybe Splunk did not register the Date field as a date but merely as a string. I went ahead and plotted the Date vs Volume chart in the visualization option, and it does seem that Splunk registered Date as a date, since the plot was rendered nicely. The snippet is shown below. I would greatly appreciate it if someone could enlighten me on this situation and on how I can extract the date into individual fields. Cheers, Lucas
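A sketch of one way to do the split, assuming Date is a string field in YYYY-MM-DD form: parse it once with strptime, then format the parts back out with strftime. (The eval quoted in the post appears to be missing the function names, which would explain why no field was created.)

```
| eval date_epoch=strptime(Date, "%Y-%m-%d")
| eval Year=strftime(date_epoch, "%Y")
| eval Month=strftime(date_epoch, "%m")
| eval Day=strftime(date_epoch, "%d")
| table Date, Year, Month, Day
```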