I am working on a dashboard where I have to display the timelines for multiple dates:

Release | In ST (Start Date) | In ST (End Date) | In RT (Start Date) | In RT (End Date) | In ET (Start Date) | In ET (End Date)
22.1    |                    |                  |                    |                  |                    |
22.2    | 03/01/2022         | 20/01/2022       | 25/01/2022         | 02/02/2022       | 03/02/2022         | 11/02/2022
22.3    | 24/01/2022         | 10/02/2022       | 16/02/2022         | 23/02/2022       | 24/02/2022         | 04/03/2022
22.4    | 16/02/2022         | 03/03/2022       | 08/03/2022         | 16/03/2022       | 17/03/2022         | 03/03/2022

I managed to display the timeline for two dates, but when I incorporate multiple dates the dashboard gets distorted. This is my search:

```
| rename "PR_Go_Live" as In_PR "In_ST_Start Date" as ST_Start_Date "In_ST_End Date" as ST_End_Date "In_ST_End Date" as RT_End_Date
| eval start = strptime(ST_Start_Date, "%d/%m/%Y")
| eval end = strptime(In_PR, "%d/%m/%Y")
| eval duration = (end - start) * 1000
| stats count by start ST_End_Date duration Release
| table start Release ST_End_Date duration
```
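One way to handle more than two dates is to turn each release row into one event per phase, which is the shape the Timeline visualization expects (_time, a label, and a duration in milliseconds). A sketch, assuming the remaining columns have been renamed to fields like RT_Start_Date, ET_Start_Date, and ET_End_Date (those names are assumptions, not from the original search):

```
| eval phases = mvappend("ST," . ST_Start_Date . "," . ST_End_Date,
                         "RT," . RT_Start_Date . "," . RT_End_Date,
                         "ET," . ET_Start_Date . "," . ET_End_Date)
| mvexpand phases
| eval phase = mvindex(split(phases, ","), 0)
| eval start = strptime(mvindex(split(phases, ","), 1), "%d/%m/%Y")
| eval end   = strptime(mvindex(split(phases, ","), 2), "%d/%m/%Y")
| eval _time = start, duration = (end - start) * 1000
| table _time Release phase duration
```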
I am trying to produce a vulnerability report for about 1000 hosts. Instead of listing them all in the Splunk query, I thought of uploading them in CSV format and fetching the data from there. Is that possible in Splunk?
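This is a common pattern: upload the CSV as a lookup table file (Settings > Lookups > Lookup table files), then use it as a subsearch filter. A minimal sketch, assuming the file is named hosts.csv with a column called host, and that the index/sourcetype names are placeholders for your own:

```
index=vuln_index sourcetype=vuln_scan
    [ | inputlookup hosts.csv | fields host ]
| stats count by host, signature
```

The subsearch expands to `(host=A OR host=B OR ...)`, so the field name in the CSV must match the field name in the events (or be renamed inside the subsearch).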
Hi guys, I found that the data transmitted by my security device was inconsistent with the amount found in search. While investigating the cause, I found a large number of similar error logs in the splunkd.log file of the indexer server, with contents like:

```
08-10-2022 18:01:52.492 +0800 ERROR HttpInputDataHandler - Failed processing HTTP input, token name=****_traffic, Channel=n/a, source_IP=1*.*.*, reply=10, events_processed=0, http_input_body_size=2014
```

What is the cause of this and how can I solve the problem? Thank you for any help; every suggestion may be very helpful to me.
Hi, I have a bunch of failure events from different API endpoints. The field is called RequestPath, and some examples are:

```
/v1/locations/45BH-JGN
/v1/exceptions/ABS/12
/v1/exceptions/ODD/13
/v2/absence/100
```

Basically, I am trying to extract only the endpoints without the IDs, so that I can get a count of which endpoints are failing, for example:

```
/v1/locations/      ---  1 failure
/v1/exceptions/ABS/ ---  4 failures
/v1/exceptions/ODD/ --- 10 failures
```

How can I do that?
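On the assumption that the ID is always the final path segment, one approach is to strip everything after the last slash with rex and then count:

```
... | rex field=RequestPath "^(?<endpoint>.+/)[^/]+$"
| stats count AS failures by endpoint
| sort - failures
```

The greedy `.+/` consumes up to the last slash, so `/v1/exceptions/ABS/12` yields `endpoint=/v1/exceptions/ABS/` while `/v1/locations/45BH-JGN` yields `/v1/locations/`.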
Hello! New to Splunk. I'm trying to make a dashboard to find Change tickets in our environment to help with outage diagnostics. Long story short, I'd like to have the Time Range Picker apply to fields that contain dates, but not to the _time field. There are two fields specifically, Start Date and End Date, that I would like the picker to apply to instead of _time. Is there any way to do this for one, if not both, of the fields?
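One common workaround is to run the panel's search over a wide fixed window and filter on the date fields yourself: `addinfo` exposes the picker's boundaries as info_min_time and info_max_time. A sketch, assuming Start Date is stored as MM/DD/YYYY (adjust the strptime format, and the index name, to your data):

```
index=change_tickets earliest=0
| addinfo
| eval start_epoch = strptime('Start Date', "%m/%d/%Y")
| where start_epoch >= info_min_time AND start_epoch <= info_max_time
| fields - start_epoch, info_*
```

The `earliest=0` keeps _time from pre-filtering the events; the picker then only constrains the Start Date comparison.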
I know I can use tokens inside CSS within an XML dashboard to automatically change styles by changing token values, but this does not seem to work if the CSS is loaded via stylesheet="xx.css" in the dashboard. Is there any way to create CSS that is dynamic based on the values of tokens? I'm looking to have user-definable colour schemes through colour values defined in config.
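An external stylesheet is static, but tokens do get substituted inside an inline html panel, so one workaround is to emit a small style block from the dashboard itself. A sketch, where color_tok is a hypothetical token set elsewhere in the dashboard (the depends trick hides the carrier row):

```
<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        .dashboard-panel .panel-title { background-color: $color_tok$; }
      </style>
    </html>
  </panel>
</row>
```

Whenever $color_tok$ changes, the style block is re-rendered with the new value.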
New Splunk user here. I installed Splunk on my Windows machine and I want to receive logs. How do I find a logon event? In search there are only the default indexes (_internal and _audit); are the logs there the same as received login event logs? Will a logon event be detected if a user accesses this Windows machine? Do I need to install any third-party application to get the logs? The Splunk forwarder is a way to send logs from remote machines, so how do I do this on the local machine? I want to check user login events in Splunk. Example: if a user accesses this Windows machine, then Splunk's job is to check the logon event details, e.g. whether only people with valid IPs are accessing this machine.
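On a local all-in-one Windows install, no forwarder is needed; Splunk Enterprise can read the Windows Event Log directly. You can enable this from the UI (Settings > Data inputs > Local event log collection) or with an inputs.conf stanza; the index name below is an assumption and must exist (or be created) first:

```
[WinEventLog://Security]
disabled = 0
index = wineventlog
```

Successful interactive logons are Security EventCode 4624, so a starting search would be `index=wineventlog sourcetype=WinEventLog:Security EventCode=4624`, with fields like Account_Name and Source_Network_Address available for filtering by user and IP.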
Hello All,

Splunk Enterprise version 8.1. After a recent server crash, our Splunk instance isn't coming up. The splunkd service isn't starting despite us having gracefully rebooted the server once. The error says TCP port 8089 is already occupied by Splunk itself, even though the Splunk service is not running. Please see the output below.

Even if I force-kill the process ID bound to 8089/tcp, the system automatically spawns a new process and shows 8089 occupied by splunkd yet again. This goes on in an endless loop. There is nothing in splunkd.log to explain this behavior. What is making Splunk launch a new process automatically despite us force-killing the PID? I have tried https://community.splunk.com/t5/Deployment-Architecture/How-to-resolve-error-quot-ERROR-The-mgmt-port-8089-is-already/m-p/357386 but no luck. As mentioned, we even restarted the host.

```
[svc-splunk@hostname bin]$ ./splunk status
splunkd is not running.
[svc-splunk@hostname bin]$ ./splunk start
Splunk> The Notorious B.I.G. D.A.T.A.
Checking prerequisites...
    Checking http port [8000]: open
    Checking mgmt port [8089]: not available
ERROR: mgmt port [8089] - port is already bound. Splunk needs to use this port.
[root@hostname bin]# netstat -tulpn | grep 8089
tcp 0 0 0.0.0.0:8089 0.0.0.0:* LISTEN 7523/splunkd
[root@hostname bin]# kill -9 7523
[root@hostname bin]# netstat -tulpn | grep 8089
tcp 0 0 0.0.0.0:8089 0.0.0.0:* LISTEN 7979/splunkd
[root@hostname bin]# kill -9 7979
[root@hostname bin]# netstat -tulpn | grep 8089
tcp 0 0 0.0.0.0:8089 0.0.0.0:* LISTEN 8452/splunkd
[svc-splunk@hostname bin]$ ./splunk status
splunkd is not running.
```

Any suggestions? If reinstalling is the only option, then please suggest how to back up this deployment server and restore it. This is a deployment server with 500+ clients phoning home to it.

Thanks
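The respawning strongly suggests something is supervising splunkd: if Splunk was enabled for boot-start under systemd, the unit (often named Splunkd) is typically configured with Restart=always, which would explain a new PID appearing after every kill. A diagnostic sketch (the unit name is an assumption; check what your system actually has, and use the PID from your own netstat output):

```
# Who is the parent of the respawned splunkd?
ps -o ppid= -p 8452
# Is there a systemd unit supervising it?
systemctl list-units --all | grep -i splunk
# If so, stop it through systemd instead of killing the process:
systemctl stop Splunkd
```

Once the supervisor is stopped, `./splunk start` should be able to bind 8089 normally.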
I have an alert where I want the date and time range below displayed in the email subject. The alert reads data from March 02, 2022 8:00pm to March 03, 2022 8:00pm, i.e. from yesterday's 8:00pm to today's 8:00pm, and it triggers every day at 11pm.

I want the date and time shown like this: March 02, 2022 8:00pm to March 03, 2022 8:00pm.

Thanks in advance
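One approach is to compute the window strings inside the alert search and reference them in the subject line with $result.…$ tokens. A sketch, assuming the 8:00pm-to-8:00pm window can be derived from the trigger time (window_start/window_end are hypothetical field names):

```
... existing alert search ...
| eval window_start = strftime(relative_time(now(), "-1d@d+20h"), "%B %d, %Y %I:%M %p")
| eval window_end   = strftime(relative_time(now(), "@d+20h"),    "%B %d, %Y %I:%M %p")
```

The subject would then be something like: `Alert for $result.window_start$ to $result.window_end$`. %I/%p render "08:00 PM"; if you need exactly "8:00pm", post-process with lower() and ltrim().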
I have two searches from two individual log files with a transaction id (Txid) in common (could be an outer join). In the first search I get the Txid from source file A and the duration of that transaction. The second search (I used the Drilldown Editor to create a click event --> Set TxnId=$click.value$) retrieves appname, columns from a SQL statement, and host, for the selected Txnid. I'd like to make these two outputs one result. How do I do it? The exact searches I used are:

```
index="IDX" (host="PRhosts") source="WS.webapi.log" "Controller.Post" "- End"
| rex field=_raw "s/^.* {/{/" mode=sed
| spath output=status path=stat
| rex field=_raw "\s+T+\s(?<txid>.*?)\s+Controller\.Post\s\-\s(?<duration>.*?)\s\-\s+End"
| sort - duration
| table txid duration
```

```
index="IDX" (host="PRhosts") source="*WS.Business.Milestones.log"
| rex field=_raw "s/^.* {/{/" mode=sed
| spath output=nv path=flds{}.nv
| spath output=status path=stat
| spath output=tid path=tid
| spath output=fn path=flds{}.fn
| search tid=$Txnid$
| table fn nv host status tid
```

WS.webapi.log raw data looks like the line below (there is a "- Begin" somewhere above it, but no duration is recorded there):

```
08/10/22 19:21:18.33 p06712 [00017] T M2kYTm7ywE6RFEnqc9m_1g Controller.Post - 00:00:00:270 - End
```

WS.Business.Milestones.log raw data looks like the following:

```
08/10/22 19:26:03.44 p08604 [00106] T {"tid":"H2R2JPpkiECRHW5hEszG3Q","sid":"T1-COOLSECURITY:CSAPPAUTH-{E7690AF7-D1F0-4A84-A612-7E47C9F07679}","stat":"Success","sf":"EmployeeLogic","sm":"GetAsync","dt":"2022-08-10T23:26:03.4462133Z","flds":[{"fn":"username","nv":"HostedRedirGlobalEmployeeWS_PR"},{"fn":"dbQueries","nv":"SQL_QUERIES=SELECT emp.EMP_ID, emp.REPORTS_TO_SCID, emp.DEPT_CODE , emp.EMP_ID\n FROM coolemp.SHIPS_COOL2 emp\n WHERE ((UPPER(emp.SYSTEM_PERSON_TYPE) != UPPER('Pending Worker'))) AND ((UPPER(emp.USER_SID) = UPPER(:emp_userSid)))"}]}
```

So I'd like to know how to join the above two results into one, so I can show the duration along with the fn and nv values that contain the SQL field "emp.Last_Updated_Date".
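A sketch of one way to combine them, keying both searches on the transaction id (the rex follows the original but matches on `\S+`, an assumption that the id and duration contain no spaces; note the Milestones search extracts `tid` while the webapi search extracts `txid`, so the subsearch renames it):

```
index="IDX" host="PRhosts" source="WS.webapi.log" "Controller.Post" "- End"
| rex field=_raw "\s+T\s(?<txid>\S+)\s+Controller\.Post\s-\s(?<duration>\S+)\s-\sEnd"
| table txid duration
| join type=outer txid
    [ search index="IDX" host="PRhosts" source="*WS.Business.Milestones.log"
      | spath output=txid path=tid
      | spath output=fn path=flds{}.fn
      | spath output=nv path=flds{}.nv
      | spath output=status path=stat
      | table txid fn nv host status ]
| table txid duration fn nv host status
```

join has subsearch result limits, so for large volumes the usual alternative is to search both sources at once, normalize the id with eval/coalesce, and use `stats values(*) by txid`.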
Is there a way to rename subfields based on a condition? Some of our applications log into fields, say message.message.A, message.message.B, etc., and some apps log the same fields into message.message.log.A, message.message.log.B, etc. Currently, if I have to search both, I use this:

```
index=* NOT message.message.log.A
| rename message.message.* AS *
| append [search index=* message.message.log.A=* | rename message.message.log.* AS *]
<more commands here>
```

Somehow, when I use this instead, it doesn't produce the expected number of events:

```
index=*
| rename message.message.* AS *
| rename log.* AS *
<more commands here>
```

There are about 20 of those similarly named subfields that live in either message.message.* or message.message.log.*. What is a better (or best) alternative to append?
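One alternative worth trying is a single rename with the deeper wildcard listed first, so the log.* variants are normalized before the shorter pattern runs; rename accepts comma-separated pairs, and events that don't match a pattern pass through untouched. A sketch (verify the event counts against your data, since wildcard rename ordering has quirks):

```
index=*
| rename message.message.log.* AS *, message.message.* AS *
| <more commands here>
```

If both shapes can ever occur in the same event, a per-field `eval A = coalesce('message.message.log.A', 'message.message.A')` is the unambiguous, if more verbose, option.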
For some reason, there are entries that are not grouped together but obviously look like they should be. In the following table, two rows with serviceTicketId = 00dcfe68-25d8-4c58-9228-5fc8f7ddb9d1 appear separately, while other serviceTicketIds, such as 00c093f4fc527e5ff7006566b1a0fd90, have one row but multiple event times. Here is my query:

```
(index=k8s_main "*Published successfully event=[com.nordstrom.customer.event.OrderLineReturnReceived*")
OR (index="k8s_main" cluster="nsk-oak-prod" "namespace"=app04096 "*doPost - RequestId*")
OR (index=k8s_main container_name=fraud-single-proxy-listener message="Successfully sent payload to kafka topic=order-events-avro*" contextMap.eventType="OrderLineReturnReceived")
| rename contextMap.orderId AS nefiOrderId contextMap.serviceTicketId AS nefiServiceTicketId
| rex field=eventKey "\[(?<omsOrderId>.*)\]"
| rex field=serviceTicketId "\[(?<omsServiceTicketId>.*)\]"
| rex "RequestId:(?<omniServiceTicketId>.*? )"
| rex "\"orderNumber\":\"(?<omniOrderId>.*?)\""
| eval appId = mvappend(container_name, app)
| eval orderId = mvappend(nefiOrderId, omsOrderId, omniOrderId)
| eval serviceTicketId = mvappend(nefiServiceTicketId, omsServiceTicketId, omniServiceTicketId)
| stats dc(_time) AS eventCount values(_time) AS eventTime values(appId) AS app BY serviceTicketId orderId
| eval timeElapsed = now() - eventTime
```
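One possible cause worth checking: the RequestId extraction `(?<omniServiceTicketId>.*? )` captures up to and including a space, so those values carry a trailing space and will never group with the same id extracted by the other rex commands. A sketch of a fix, matching a run of non-whitespace instead:

```
| rex "RequestId:(?<omniServiceTicketId>\S+)"
```

Since `stats ... BY` compares the grouping values byte-for-byte, even an invisible trailing space is enough to split an id onto two rows.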
Hello,

I have inherited a set of Splunk servers, and three are search heads. Some of the apps on the search heads are installed directly. I would prefer to manage those apps from the deployment server. I have found lots of information about deploying apps via single-instance/distributed, but not a lot on how to move an app from a single instance to a deployment server. In my head, the procedure looks like this:

1. On the search head, tar up the app from $SPLUNK_HOME/etc/apps/<appname>
2. Copy the tar file to the deployment server
3. Untar the file into $SPLUNK_HOME/etc/deployment-apps
4. Uninstall or disable the original app on the original search head
5. Map the new app to a serverclass and to the client system (the original search head)

Is this correct?

--jason
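The steps above can be sketched as shell commands (paths, the app name, and the deployment server hostname are placeholders; run the tar/untar as the user that owns $SPLUNK_HOME):

```
# On the search head
tar -czf /tmp/appname.tgz -C $SPLUNK_HOME/etc/apps appname
scp /tmp/appname.tgz deploymentserver:/tmp/

# On the deployment server
tar -xzf /tmp/appname.tgz -C $SPLUNK_HOME/etc/deployment-apps
$SPLUNK_HOME/bin/splunk reload deploy-server
```

The reload makes the deployment server notice the new app without a restart; the serverclass mapping itself is done in serverclass.conf or via Forwarder Management in the UI.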
Hello, I've encountered a problem while trying to download Splunk Enterprise. I log in to my account and reach this page: https://www.splunk.com/en_us/download/splunk-enterprise.html. I press the download button, a little loading circle appears, and then a 401 status is returned from this URL: https://eula.splunk.com/api/v1/session/callback. I've attached a screenshot for clarification. What's wrong with my account? I've been using it for the past year without any problems. I've tried multiple browsers and devices.
Hi, when creating a dashboard in the new Dashboard Studio, I have a lot of inputs for filtering. I would like to break the inputs into new lines (or group them) so that they are more easily reviewed. Example: line 1 holds inputs for the asset (Hostname, IP Address, ...), line 2 holds inputs for, e.g., vulnerabilities (cve, base_score, ...). I don't see any option to move an input onto a new line. As it is, this is very uncomfortable: just lots of inputs without clear visibility, and it depends on the screen size whether an input lands in the first or second row. I saw lots of solutions for Classic dashboards, but what about the new Dashboard Studio? Thank you.
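In absolute-layout dashboards, Dashboard Studio allows inputs to be moved out of the global input bar and placed on the canvas, which gives row-by-row control. In the dashboard's JSON source, that placement looks roughly like the sketch below (the input ids and coordinates are hypothetical; verify the exact schema against your Splunk version's Dashboard Studio documentation):

```
"layout": {
    "type": "absolute",
    "structure": [
        { "item": "input_hostname", "type": "input", "position": { "x": 20,  "y": 20, "w": 220, "h": 50 } },
        { "item": "input_ip",       "type": "input", "position": { "x": 260, "y": 20, "w": 220, "h": 50 } },
        { "item": "input_cve",      "type": "input", "position": { "x": 20,  "y": 90, "w": 220, "h": 50 } }
    ]
}
```

Grid layout, by contrast, keeps all inputs in the top bar and wraps them by screen width, which matches the behavior you are seeing.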
I'm having trouble extracting some dates from a date field. Certain assets were provided with a generic date, and I can't seem to extract the date for those events. Sample data:

```
lastscan                  newdate
2022-08-10T06:51:33.874Z  2022-08-10
2022-08-10T00:06:19.920Z  2022-08-10
1969-12-31T23:59:59.999Z  (no result)
```

SPL:

```
| eval newdate=strptime(lastscan,"%Y-%m-%d")
| eval newdate=strftime(newdate,"%Y-%m-%d")
```

As you can see, the events with the 1969 date are not converting as expected, and I'm getting no results in the newdate field for them. Any thoughts on how I can extract the date from the 1969 events?
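Since 1969-12-31 falls just before the Unix epoch, the strptime result is a negative epoch value, and the strptime/strftime round trip does not reliably handle it. If all you need is the calendar date, one way to sidestep time conversion entirely (assuming lastscan always starts with YYYY-MM-DD) is plain string slicing:

```
| eval newdate = substr(lastscan, 1, 10)
```

substr is 1-based, so this keeps the first ten characters, i.e. the date portion of the ISO timestamp, regardless of the year.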
I have a particular source/sourcetype; is there a way (through SPL) to get the name of the forwarder this particular source feed is coming from?
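There is no dedicated per-event "forwarder" field by default, but when events arrive straight from a universal forwarder, the host field usually is the forwarder (unless host has been overridden in inputs/props). A sketch (the index/source/sourcetype names are placeholders for your own):

```
index=my_index source=my_source sourcetype=my_sourcetype
| stats values(host) AS sending_hosts, count by source, sourcetype
```

If intermediate forwarders or host overrides are in play, the forwarder connections themselves can be inspected on the indexer side via `index=_internal source=*metrics.log group=tcpin_connections`.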
Hopefully I can explain this so it's not too confusing and I'm not overcomplicating things... I'm currently setting a token based on a particular click value, which is used to drive other charts in the dashboard. I'm looking to expand on that and look up a second token based on that token, by appending something like _IntValue to the token name. Here's an example:

1) I first set static integer token values in the dashboard based on what the values in the first chart will be. The first chart will have click values of "Sample Click" and "Sample 2 Click". I want to manually say that the IntValues for those are 20 and 60:

```
<set token="Sample Click Value_IntValue">20</set>
<set token="Sample 2 Click Value_IntValue">60</set>
```

2) When I click on the dashboard and the value is "Sample Click", I want to be able to use "Sample Click_IntValue" as a token in another chart.

The real-life scenario: certain click values already calculate total initiations over a period of time, and each initiation takes X hours to complete. I want to show that one particular task, which takes 20 hours per attempt and was run 20 times over the period, equates to 400 hours; and when another task is clicked that takes 60 hours per attempt and was run 10 times, that equates to 600 hours.

Thanks in advance!!!!
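As far as I know, Simple XML cannot dereference a token whose name is itself built from another token, but an eval token in the drilldown can compute the integer directly from the click value, which achieves the same mapping. A sketch (the case() branches mirror the hypothetical values above; clicked_int is a hypothetical token name):

```
<drilldown>
  <set token="clicked_label">$click.value$</set>
  <eval token="clicked_int">case($click.value|s$ == "Sample Click", 20, $click.value|s$ == "Sample 2 Click", 60, true(), 0)</eval>
</drilldown>
```

The other chart can then use $clicked_int$ in its search, e.g. `| eval total_hours = initiations * $clicked_int$`. The |s filter quotes the clicked string so the eval expression parses correctly.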
My customer's certificates expired, and they followed the procedures for submitting and requesting a third-party certificate. The CA returned a certificate that was already combined, so the customer did not have to combine their certificates. When trying to start Splunk, it will not start. Comparing all the certificates with the previous ones, one thing we noticed was that the private key had the heading "-----BEGIN RSA PRIVATE KEY-----" instead of "-----BEGIN PRIVATE KEY-----", with two extra lines after it stating "Proc-Type" and "DEK-Info". The customer is on Splunk v8.2.7, Windows 64-bit. The keys are DoD CA-60. I am wondering if the private key is not in the correct format. Should the customer re-submit a request to generate a new key from the CA?
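For what it's worth, Proc-Type/DEK-Info headers indicate a passphrase-encrypted traditional (PKCS#1) key; splunkd can only use an encrypted key if the passphrase is configured (sslPassword), so a common fix is to strip the passphrase with openssl rather than requesting a new key. A sketch (the file names and passphrase are placeholders):

```shell
# Remove the passphrase from an encrypted RSA key ("BEGIN RSA PRIVATE KEY"
# with Proc-Type/DEK-Info headers), producing a key splunkd can read directly
openssl rsa -in server_key_encrypted.pem -passin pass:changeme -out server_key.pem
```

Keep the decrypted key's filesystem permissions tight, since it is no longer protected by a passphrase.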
We are trying to standardize our nomenclature for indexes. Is it possible to rename an index and move the data from the old index to the new index name? Example: index "fit_azure" needs to change to "top-azure". Are there things to consider that I'm probably not keeping in mind? My concerns about doing this are:

- Would renaming the index require re-ingesting the data? If so, what about the cases where ingesting data into an index deletes the log on the system sending it into Splunk?
- How would this impact storage (hot/warm/cold and all that)? Would the previous storage be inaccessible under the new index name?
- Would permissions on viewing the index change?
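Whatever migration route is chosen, the new index has to be defined on the indexers before anything can be routed or moved into it. A minimal indexes.conf sketch (the paths shown are the conventional defaults; retention and sizing settings should be copied from the existing fit_azure stanza):

```
[top-azure]
homePath   = $SPLUNK_DB/top-azure/db
coldPath   = $SPLUNK_DB/top-azure/colddb
thawedPath = $SPLUNK_DB/top-azure/thaweddb
```

Role permissions (srchIndexesAllowed) and any saved searches referencing fit_azure would also need to be updated to the new name.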