All Topics


We are currently exploring SplunkJS for rendering data in our custom app. We are able to authenticate and display charts based on searches directly from our web app, but we are having difficulty integrating it with our React app, as SplunkJS is not component based. We saw the new Splunk UI / Dashboard Studio with many React components, e.g. @splunk/react-ui and @splunk/visualizations, and we think we can use these React components in our external web app, but we are not able to see any authentication mechanism in these React components. How can we use these React components in an external app that authenticates against Splunk Enterprise, fires searches, and displays charts? Thanks in advance.
Hi, I am getting the following error when starting the container using the command below. Any idea?

Sunday 08 August 2021 14:19:09 +0000 (0:00:00.050) 0:05:37.573 *********
TASK [splunk_standalone : Setup global HEC] ************************************
fatal: [localhost]: FAILED! => {
    "cache_control": "private",
    "changed": false,
    "connection": "Close",
    "content_length": "130",
    "content_type": "text/xml; charset=UTF-8",
    "date": "Sun, 08 Aug 2021 14:19:11 GMT",
    "elapsed": 0,
    "redirected": false,
    "server": "Splunkd",
    "status": 401,
    "url": "https://127.0.0.1:8089/services/data/inputs/http/http",
    "vary": "Cookie, Authorization",
    "www_authenticate": "Basic realm=\"/splunk\"",
    "x_content_type_options": "nosniff",
    "x_frame_options": "SAMEORIGIN"
}

MSG:
Status code was 401 and not [200]: HTTP Error 401: Unauthorized

PLAY RECAP *********************************************************************
localhost : ok=56 changed=2 unreachable=0 failed=1 skipped=58 rescued=0 ignored=0

Sunday 08 August 2021 14:19:11 +0000 (0:00:02.151) 0:05:39.725 *********
===============================================================================
splunk_common : Get Splunk status ------------------------------------- 233.48s
splunk_common : Start Splunk via CLI ----------------------------------- 48.29s
splunk_common : Update Splunk directory owner -------------------------- 20.43s
splunk_common : Wait for splunkd management port ----------------------- 10.10s
splunk_common : Test basic https endpoint ------------------------------- 4.14s
Gathering Facts --------------------------------------------------------- 3.16s
splunk_common : Cleanup Splunk runtime files ---------------------------- 2.49s
splunk_standalone : Setup global HEC ------------------------------------ 2.15s
splunk_common : Check if /sbin/updateetc.sh exists ---------------------- 1.40s
splunk_common : Check for scloud ---------------------------------------- 1.38s
splunk_common : Start Splunk via service -------------------------------- 1.28s
splunk_common : Update /opt/splunk/etc ---------------------------------- 0.90s
splunk_common : Find manifests ------------------------------------------ 0.68s
splunk_common : include_tasks ------------------------------------------- 0.49s
splunk_common : include_tasks ------------------------------------------- 0.46s
splunk_common : Remove user-seed.conf ----------------------------------- 0.43s
splunk_common : Enable splunktcp input ---------------------------------- 0.39s
splunk_common : Check for existing installation ------------------------- 0.38s
splunk_common : Ensure license path ------------------------------------- 0.36s
splunk_common : Create .ui_login ---------------------------------------- 0.30s

The command:

# docker run --name splunk-mount -v opt-splunk-etc:/opt/splunk/etc -v opt-splunk-var:/opt/splunk/var -d -p 8000:8000 -e SPLUNK_START_ARGS=--accept-license -e SPLUNK_PASSWORD=password splunk/splunk:latest
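The 401 on /services/data/inputs/http/http suggests splunkd rejected the playbook's credentials rather than anything HEC-specific. A minimal sketch to reproduce that call outside Ansible, assuming the management port 8089 is reachable from the host and that "admin" plus the SPLUNK_PASSWORD value are the credentials in play (both assumptions):

import requests

# Same endpoint the failing Ansible task calls.
resp = requests.get(
    "https://127.0.0.1:8089/services/data/inputs/http/http",
    auth=("admin", "password"),  # hypothetical; use your SPLUNK_PASSWORD value
    verify=False,                # splunkd ships with a self-signed certificate
)
print(resp.status_code)          # 401 here too means the credentials are wrong
print(resp.text)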
I have a drilldown that shows me some Test values, and this is the on-click search:

index=main | where Test="$click.value$"

The problem is when $click.value$ contains a double quote ("); I then get an "Unbalanced quotes" error on the search screen. How do I fix it?
Hi Community, we have integrated our ITSI cluster with ServiceNow and tickets are being created fine, but we recently observed strange behavior from Splunk ITSI. Episodes generated in Episode Review create a ServiceNow incident, and once the issue is resolved the episode gets resolved. But when the same issue happens on the same node again, the resolved episode's count gets increased instead of a new notable event and a new episode being created. The ITSI logs do not provide much detail about this; please help check why.

Best regards,
Vinay
vi323056@wipro.com
Using the Splunk SDK, I am ingesting JSON data into a Splunk index via this line of code:

index.submit(event, host="localhost", sourcetype="covid_vacc_data_ingest")

This line of code is working and data is ingested, but the timestamp is always the ingestion time rather than the date field on the event. Here is a screenshot of my settings in Splunk Enterprise for this sourcetype: [screenshot]. Here is a screenshot of what the ingested data looks like: [screenshot]. I want the _time field on the left to be the date field on the right. Any suggestions? Not sure what I am doing wrong. Thank you!
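For reference, a sketch of the ingestion path described above with the Splunk Python SDK. index.submit() itself doesn't set _time from a field; timestamp extraction is governed by the sourcetype's settings (e.g. TIME_PREFIX and TIME_FORMAT in props.conf), so that's where the "date" field would have to be pointed at. Connection details, the index name, and the event fields below are placeholders.

import json
import splunklib.client as client

# Placeholders: host, credentials, and index name.
service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")
index = service.indexes["covid"]

# The event carries its own "date" field; whether _time picks it up depends
# on the sourcetype's TIME_PREFIX / TIME_FORMAT, not on submit() itself.
event = json.dumps({
    "date": "2021-08-01",        # the field _time should come from
    "location": "US",
    "doses_administered": 12345,
})
index.submit(event, host="localhost", sourcetype="covid_vacc_data_ingest")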
I understand the "Classic Experience". I understand the new k8s based "Victoria Experience". But my Cloud instances claims to be part of the "Niagara Experience", and I cannot get anyone at splunk- i... See more...
I understand the "Classic Experience". I understand the new k8s based "Victoria Experience". But my Cloud instances claims to be part of the "Niagara Experience", and I cannot get anyone at splunk- including support and including my inside sales rep- to tell me what that is. Bueller? What is "Niagara"? Is the underlying platform k8s as it is with "Victoria"? What customer-facing differences are there between Classic, Victoria, and Niagara? Why isn't Niagara listed on https://docs.splunk.com/Documentation/SplunkCloud/8.2.2106/Admin/Experience?
Hi everyone, I'd like to know if it is possible to have the following example dashboard with a single table panel. For example:

column1: src_ip
column2: dest_ip
column3: MB_downloaded

So this is simple, but what I'd like to do is be able to treat each line and trace what happened, using two additional columns:

- one with a checkbox, to be checked when the subject (described in the row) has been acknowledged by the analyst. If the row is ACKed, it becomes green; otherwise, it stays red.
- one with a comment section for the analysis of the row (example: "John downloaded 10 MB from google.com; he downloaded a .xlsx file named test.xlsx").

Also, is there a way to keep a trace of what was acknowledged? Maybe export every checked row to a lookup? I guess this needs .js and .css files? Or can it be done with a simple XML dashboard?

Thank you in advance!
Hi, we are planning to implement Splunk in our environment, so we need a demo session on APM, RUM, and end-to-end user monitoring.
Hello All, I have 3 indexers in a cluster, and the data is stored on a NAS server; for one server, the data is stored in cold buckets on mounted storage. I have copied the data from the NAS to 2 servers. The one with the mount point is giving me a duplicate error, and I am not able to see the copied data; /opt/splunk/var/lib/splunk/accessdb/thaweddb/ is marked as disabled due to a conflict. I have tried multiple commands to rebuild the Splunk db on all the indexers, and I am getting the error shown in the attached screenshots. @ivanreis @lmyrefelt @kmorris_splunk @Masa @jkat54 @493669 @mayurr98
I'm using the HTTP Event Collector on my free trial Cloud instance.

URLs I tried:

https://inputs.<MY_SPLUNK_INSTANCE_ID>.splunkcloud.com:8088/services/collector/event/1.0
https://inputs.<MY_SPLUNK_INSTANCE_ID>.splunkcloud.com:8088/services/collector/event
https://inputs.<MY_SPLUNK_INSTANCE_ID>.splunkcloud.com:8088/services/collector

Payloads I tried:

1) {time: -3730851658780559, event: { event: 'test', message: 'localhost event', myts: 1628340011441 }}
2) '{"time":"1628340065.594","event":{"message":"localhost event","severity":"info"}}'

The response I'd get: { text: 'Success', code: 0 }

Then I tried these search queries in my Splunk Search app, and I get 0 events:

- event.message=*
- event=*

What is happening?
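For comparison, a well-formed HEC request sketched in Python (the token and hostname are placeholders). One thing worth noting about the payloads above: HEC's "time" field is Unix epoch seconds, so a huge negative value like -3730851658780559 stamps the event far outside any normal search time range, which would explain "Success" on ingest but 0 events in a default time-range search.

import time
import requests

resp = requests.post(
    "https://inputs.example.splunkcloud.com:8088/services/collector/event",
    headers={"Authorization": "Splunk 00000000-0000-0000-0000-000000000000"},
    json={
        "time": time.time(),  # current Unix epoch seconds
        "event": {"message": "localhost event", "severity": "info"},
    },
)
print(resp.json())  # expect {"text": "Success", "code": 0}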
How to improve Splunk Deployment server scalability?
Hi Splunk experts, I have the below use case and am using the below query:

index=Index1 app_name IN ("customer","contact")
| rex field=msg.message.details "\"accountUuid\":\"(?<SFaccountUUID>[^\n\r\"]+)"
| rex field=msg.message.details "\"contactId\":\"(?<SFcontactUUID>[^\n\r\"]+)"
| rex field=msg.details "\'customerCode\'\=\'(?<cac>[^\n\r\']{10})\'"
| rename msg.correlationId AS correlationId
| stats latest(SFcontactUUID) as contactUUID, latest(SFaccountUUID) as accountUUID, values(msg.tag.Status) as QStatus, values(msg.tag.errorMessage) as Q_errorMessage, values(msg.tag.errorCode) as Q_errorCode by correlationId
| join type=left correlationId
    [search index=index2 app_name="contact1"
    | rename msg.message.header.correlationId AS correlationId
    | stats values(msg.message.header.Status) AS DStatus, values(msg.message.header.eventName) AS eventName, values(msg.message.header.errorMessage) as D_errorMessage, values(msg.message.header.errorCode) as D_errorCode by correlationId]

The common identifier between the two searches is the correlationId. Below is a sample result:

correlationId: ab861125-6cd7-493b-999f-ef9b2edd8315023758601
contactUUID: (empty)
accountUUID: C0DABCC1-EFC8-11eb-A67A-005056B89B42
Q_errorMessage: null
Q_errorCode: null
D_errorCode: 201, null
D_errorMessage: null, { "ContactUUID": "b020c98a-43f5-d6b3-e983-45ffddf52a73"}

Is it possible to coalesce the highlighted value (the ContactUUID inside D_errorMessage) from the subsearch into the contactUUID field in the outer search? I am expecting this value in either the outer search or the subsearch, so how can I solve it?
Hi, I have an Excel file with 11,000 records and 5 columns (Username, Unique UserId, **, **, **). I need to prepare a report on users' logins (LoginService calls) for the past 15 days. I have to search in Splunk based on UserId and retrieve all the events matching those UserIds. Can someone help me with an effective solution?
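One possible approach, sketched below (uploading the list as a lookup file inside Splunk would be another): export the Excel sheet to CSV, then run the search in batches of UserIds through the Splunk Python SDK so each generated search stays a manageable size. The index name, the "LoginService" matching, the column header, and the credentials are all assumptions.

import csv
import splunklib.client as client
import splunklib.results as results

# Connect once; credentials here are placeholders.
service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")

# users.csv is the Excel sheet saved as CSV; "Unique UserId" is assumed
# to be the exact column header.
with open("users.csv") as f:
    user_ids = [row["Unique UserId"] for row in csv.DictReader(f)]

# Search in batches so the IN (...) clause stays within search size limits.
for i in range(0, len(user_ids), 500):
    batch = ",".join('"%s"' % u for u in user_ids[i:i + 500])
    query = ('search index=app_logs "LoginService" UserId IN (%s) '
             'earliest=-15d' % batch)
    stream = service.jobs.export(query, output_mode="json")
    for result in results.JSONResultsReader(stream):
        if isinstance(result, dict):  # skip diagnostic messages
            print(result)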
Dear All, I want to integrate Imperva SecureSphere WAF with Splunk. I have installed the add-on on my search head, and it shows up in the etc/apps directory on my SH and HF. Now please let me know how to get the Imperva logs into Splunk.
I'm trying to build a search that will return an event and the severity of that event. I have the events (with wildcards for the parts that might change) and the severity in a lookup. Here's an example from my lookup:

Message,Severity
*kernel: nfs: server * OK,normal
*kernel: nfs: server * not responding* still trying,critical

If I run this search I get back the results I'd like, but I have no way of referencing them back to the lookup to grab the severity, because the Message doesn't match what's in the lookup due to the wildcards:

index=os source=/var/log/messages host=linuxserver1p
| rex field=_raw "(?<Message>.*)"
| search [inputlookup mylookup.csv | table Message]

Jul 28 02:15:40 linuxserverp kernel: nfs: server fixdist OK
Jul 28 01:30:37 linuxserver1p kernel: nfs: server fixdist not responding, still trying

How can I take these results back to my lookup and pull the severity out? Here is another search I've tried, where I have both the results I want and the values from the lookup, and I just need to join them together somehow; but as far as I can tell, the join won't work with wildcards:

| inputlookup mylookup.csv
| rename Message as msg
| append [search index=os source=/var/log/messages host=linuxserver1p | rex field=_raw "(?<Message>.*)" | search [inputlookup mylookup.csv | table Message]]
| table Message msg Severity
I am following along with the Splunk docs to self-sign a certificate for my Splunk Web UI. Everything goes fine until I get to this command:

/opt/splunk/bin/splunk cmd openssl x509 -req -in mySplunkWebCert.csr -CA myCACertificate.pem -CAkey myCAPrivateKey.key -CAcreateserial -out mySplunkWebCert.pem -days 1095

I am getting "mySplunkWebCert.csr: No such file or directory". Unfortunately, I'm not familiar enough with certs to troubleshoot this. Any help appreciated.
Where do I find a new API key for the Splunk ES app called MITRE ATT&CK? The app is not working; the error I get is "Correct API key for MITRE attack".
Just want to confirm this behavior for a scripted input. The script I want to call actually runs perpetually: it calls an API, outputs the data, waits 120 seconds, calls the API again, wash rinse repeat. What I'm wondering: can I schedule this script, say every 5 minutes, and will the scheduler say "script is still running... not starting again", so that if it stopped for whatever reason, it just gets started again? Or will it keep starting multiple instances of the script while it's still running?
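Whatever the scheduler does (splunkd generally will not launch a second copy of a scripted input while the previous invocation is still running, but it costs little to be defensive), the script can guard itself with a lock so a second invocation exits immediately. A minimal Unix-only sketch; the lock path and the API call are placeholders:

#!/usr/bin/env python
# Defensive wrapper for a perpetually-running scripted input: if a previous
# invocation still holds the lock, this one exits immediately, so scheduling
# the script every 5 minutes simply restarts it after a crash.
import fcntl
import sys
import time

LOCK_PATH = "/opt/splunk/var/run/my_api_input.lock"  # hypothetical path

lock_file = open(LOCK_PATH, "w")
try:
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    sys.exit(0)  # previous instance is still running; nothing to do

while True:
    # Placeholder for the real API call; scripted inputs emit events on stdout.
    print('{"message": "polled the API"}')
    sys.stdout.flush()
    time.sleep(120)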
Hello, I have a test environment, and the SHC members aren't allocated the recommended resources (because it's test); however, I hadn't had any issues with the environment until recently. For whatever reason, the 3 SHC members in my test environment keep getting shut down because of signal 9 (the server itself is killing the splunk process). Signal 9 is a KILL signal from an external process: the server is running out of memory, and that's the cause of the kill. If I restart the SHC members the resources are freed, but the spiral starts over once again.

[Screenshot showing the memory decline; something is eating away at it.]

When I run the top command on the search heads and press "e" to change the unit, I can see it's Splunk's mongod that's taking up most of the memory so far. I also have replication issues every now and again, where I have to resync.
I am having issues finding a way to export two reports, which I'll call search1 and search2. Both searches were run, then sent to run in the background. According to the Jobs tab, both searches completed. The customer wanted this search run for "all time", and thus it is quite large: search1 is 9.22 GB and search2 is 4.97 GB. The issue is getting access to the logs. I've tried using | loadjob sid, and it just hangs and fails. I've tried exporting from the Jobs tab, and it fails. I can't use the API because, from what I can tell, you must put the password into the search, which then makes the password searchable for anyone with access to that log. I went to the $SPLUNK_HOME/var/run/splunk/dispatch folder and found both jobs. This link, https://docs.splunk.com/Documentation/Splunk/8.2.1/Troubleshooting/CommandlinetoolsforusewithSupport#toCsv, says to run "splunk cmd splunkd toCsv ./results.srs.gz". The .gz file appears to now be .zst, but I ran the command anyway. For search1, after a while it simply said "killed". Search2, as I'm writing this, appears to be working, as comma-delimited text is scrolling on the console; I assume that once converted, I will be able to export it. So how do I export search1 and other large result sets in the future? The toCsv command was the last thing I found to try. Perhaps there is a setting in a .conf file I can modify and then run something else? Any assistance is appreciated.
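On the API worry: the REST API takes credentials in the request itself, not inside the SPL, so the password never appears in a search string or in the search logs. A sketch with the Splunk Python SDK that pages a finished job's results out to CSV (the SID, host, credentials, and page size are placeholders):

import os
import splunklib.client as client

service = client.connect(
    host="localhost", port=8089,
    username="admin",
    password=os.environ["SPLUNK_PASSWORD"],  # keep the password out of SPL
)

job = service.jobs["1628340065.594"]  # hypothetical SID from the Jobs tab
offset, page = 0, 50000

with open("search1.csv", "wb") as out:
    while True:
        chunk = job.results(output_mode="csv", count=page, offset=offset).read()
        if not chunk.strip():
            break
        # Note: every page repeats the CSV header row; dedupe downstream.
        out.write(chunk)
        offset += page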