All Topics


Hello, we are attempting to ingest CSV files from two different applications where the file name and structure are identical. The files are placed into two different directories on a heavy forwarder and contain different sets of data. The problem we are having is that the file sent to the prod directory is ingested but the file in the dev directory is not, or sometimes neither file gets ingested. Since the file names are identical and the files are delivered to both directories at the same time, I suspect this is what causes one or both files to be skipped. Below is how we configured the batch inputs; it does seem to work, but not consistently. Any help would be appreciated!

#Production
[batch://C:\Import\Prod\*.csv]
index = test
sourcetype = test
move_policy = sinkhole

#Development
[batch://C:\Import\Dev\*.csv]
index = testb
sourcetype = testb
move_policy = sinkhole
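Splunk tracks files it has already read by a checksum of their first few hundred bytes, so identically named CSVs with identical headers can be mistaken for the same file. A minimal sketch of one common workaround, assuming the stanzas above live in inputs.conf on the heavy forwarder, is to salt that checksum with the full source path so the Prod and Dev copies are tracked separately:

#Production
[batch://C:\Import\Prod\*.csv]
index = test
sourcetype = test
move_policy = sinkhole
# include the full path in the file checksum so identically named
# files in different directories are treated as distinct sources
crcSalt = <SOURCE>

#Development
[batch://C:\Import\Dev\*.csv]
index = testb
sourcetype = testb
move_policy = sinkhole
crcSalt = <SOURCE>

Raising initCrcLength above its 256-byte default is an alternative when the files differ only after a long shared header.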
My Splunk version is 7.3, but Splunk has left Russia and is no longer supported here. If I upgrade to version 8, it is likely that the company's Splunk will be blocked. We have an official license. Is it possible to upgrade Splunk in such a way that it does not call back to the central office and does not get blocked?
Auditors are looking for an updated AOC for Splunk. Where can we find this document from Splunk?
As a deep dive into my data sources and data integrity, I need to learn which SPL searches and apps should be used for this purpose. I appreciate your help.
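A minimal starting point, assuming you can search all indexes, is a tstats sweep that shows what is arriving per index and sourcetype so gaps or silent sources stand out (this is only a sketch, not a full data-integrity framework):

| tstats count latest(_time) as last_event where index=* by index, sourcetype
| eval last_event = strftime(last_event, "%Y-%m-%d %H:%M:%S")
| sort - count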
I was getting an SSL error due to a self-signed certificate on port 8089. That certificate has been replaced with a DigiCert-signed certificate. With the updated certificate I am able to connect to the Splunk API on 8089 from my local desktop. However, I am still getting an SSL error when connecting from our application server. I validated that ports 8089 and 443 are open from the app server. I can reach Splunk on port 443 from the app server, but when trying to connect on port 8089 I get an SSL error. Please help me understand what could be causing this and how to resolve it.
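One quick way to narrow it down, assuming curl is available on the application server and splunk-host is a placeholder for your Splunk server, is to inspect which certificate chain the management port actually presents to that host:

# -v prints the TLS handshake, including the certificate subject and issuer;
# -k lets the request complete even if verification fails
curl -vk https://splunk-host:8089/services/server/info

If the issuer shown from the app server is still the old self-signed one, something in the path (a proxy, a different splunkd instance, or a stale certificate file) is answering on 8089 for that host.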
Hi everyone, how are you doing? I have a challenge: I need to take the data returned by some searches on a Splunk dashboard, put it into a table (I think via DB Connect), and share this data with an AWS Redshift cluster. Do you know how I can do that? I have been trying to find something on the internet, but any path, solution, idea, or documentation would be useful! Thanks in advance.
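A minimal sketch of one approach, assuming Splunk DB Connect is installed with a JDBC connection to Redshift and a DB Connect output named redshift_out already defined (the search, the output name, and the table mapping are placeholders), is to schedule the dashboard's search and pipe its results to dbxoutput:

index=web sourcetype=access_combined
| stats count as hits by status
| dbxoutput output="redshift_out"

The first two lines stand in for whatever search drives the dashboard panel; the dbxoutput command writes the result rows into the table configured in the named DB Connect output.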
I've recently installed an add-on in my dev instance which created various fields, including user and NormalizedUser. I have a one-time CSV file with a list of users that I need imported into one or both of those existing fields. Is this possible? It doesn't seem to work, and I'd prefer not to have to search against multiple fields (I'd like to query the add-on's index for user or NormalizedUser and retrieve the entire list of users or NormalizedUsers). Currently the CSV data seems to end up under some other field name, and I don't even know where that field name comes from. The field header on the CSV column is NormalizedUser. Any help appreciated.
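One hedged sketch, assuming the file has been uploaded as a lookup called users.csv with a column header NormalizedUser and that addon_index is a placeholder for the add-on's index, is to use the CSV as a search-time filter instead of re-ingesting it, so the existing field extractions keep working:

index=addon_index
    [ | inputlookup users.csv
      | rename NormalizedUser as user
      | fields user ]
| table user, NormalizedUser

The subsearch expands to user=<value1> OR user=<value2> ..., so only events matching the listed users are returned.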
Hi Team, Splunk is unable to read a file which has the particular content below. If the file contains other content, Splunk is able to ingest it, so I am not sure what is wrong with this content. If I reorder the lines, it is also able to recognize them. Please find the file content below.

ACCNT|AB10012345|1234567890ABC4567890123456789012|INR|C|01-07-2021 00:00:00|30-07-2021 00:00:00|TOD Ref. Type [IC] not set for scheme [MMSAA]||
ACCNT|AB10012345|1234567890ABC4567890123456789012|INR|C|01-07-2021 00:00:00|30-07-2021 00:00:00|There is no transaction for the combination [02-08-2021 00:00:00] and [   M12345]. The posting of transaction failed. Transaction ID: [  M12345]||
ACCNT|AB10012345|1234567890ABC4567890123456789012|INR|C|01-07-2021 00:00:00|30-07-2021 00:00:00|The posting failed.||

Error for this file in the Splunk logs:

ERROR TailReader - File will not be read, seekptr checksum did not match (file=<FullPath_of_file_with file name>). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at <website of splunk> for more info.
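The error indicates that Splunk's file tracker matched this file's leading bytes against a file it has already read, which is exactly what the message's suggestion about initCrcLen and a CRC salt is aimed at. A minimal sketch, assuming the monitor stanza for this source lives in inputs.conf (the path and sourcetype below are placeholders), would be:

[monitor:///path/to/source/directory]
sourcetype = your_sourcetype
# hash the first 1024 bytes instead of the default 256, for files
# that share a long identical prefix
initCrcLength = 1024
# or, alternatively, include the full path in the checksum
# crcSalt = <SOURCE>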
Hi, I am trying to export a PDF in the Splunk Security Essentials app --> Analytics Advisor --> MITRE ATT&CK Framework --> Export PDF, but the data in the PDF is not in a proper format. Please help.

PDF Screenshot
Dashboard Screenshot
I am trying to determine where to look for timestamps and to make sure times are current and synchronized across my Splunk and ES environment. I appreciate your time and response in advance.
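One hedged way to check this from SPL, assuming you can search the indexes of interest, is to compare event time with index time; large or negative lags usually point at clock skew or timestamp-extraction problems:

index=* earliest=-1h
| eval lag_seconds = _indextime - _time
| stats avg(lag_seconds) as avg_lag max(lag_seconds) as max_lag min(lag_seconds) as min_lag by index, sourcetype
| sort - max_lag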
Hello, I have seen multiple posts related to large lookup files delaying replication in a distributed environment. In my case I have a lookup table of around 120 MB that is used in an automatic lookup, so it has to be replicated to the search peers. The lookup file is static and rarely changes. My questions are:
- Once the replication bundle syncs successfully, will the Splunk SH try to replicate it again to the peers if no change has been found?
- If the file changes by only a few lines/records, will Splunk replicate just the delta from the previous state?
Bandwidth is limited, so I don't want to create a bottleneck during operations. Thank you in advance for your time. With kind regards. Chris
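On the bandwidth point, a minimal sketch, assuming the lookup is called big_lookup.csv in the search app and you are willing to distribute it to the peers by some other means, is to exclude it from the knowledge bundle in distsearch.conf on the search head (the stanza entry name and path pattern are placeholders):

[replicationBlacklist]
# exclude the large, rarely changing lookup from bundle replication;
# the file must then exist locally on each search peer
excludebiglookup = (.*)(/|\\)apps(/|\\)search(/|\\)lookups(/|\\)big_lookup\.csv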
I set up a sample VM for myself to test out Splunk configuration. I wanted a stand-alone instance just to make sure I can get my basic configuration running and forward logs from a Kubernetes instance. However, I am stuck verifying the event receive resource. Here are the steps I followed:

1. Set up a Linux VM
2. Install Splunk
3. Confirm the web UI is working as expected
4. Create an index called splunk_test_events (Type: events, App: search)
5. Go to Settings > Forwarding and Receiving and set up a receiving port of 9997
6. In Settings > Data Inputs, set up an HTTP Event Collector (details below)
7. Ensure tokens are enabled (I forget where this was)
8. Restart Splunk
9. SSH into the machine and check the running ports (see below)
10. Attempt to curl an event

The HTTP Event Collector is set up as:

Name: splunk_testing_events
Source Type Entered:
Source Type Selected:
Allowed Indexes: splunk_test_events
Default Index: splunk_test_events
Output Group: None
Enable Indexer Acknowledgement: On

I verified that the HTTP Event Collector is enabled. I log into the machine and check the ports that are active:

$ sudo lsof -i -P -n | grep LISTEN
systemd-r 649 systemd-resolve 13u IPv4 23727 0t0 TCP 127.0.0.53:53 (LISTEN)
sshd 751 root 3u IPv4 26648 0t0 TCP *:22 (LISTEN)
sshd 751 root 4u IPv6 26650 0t0 TCP *:22 (LISTEN)
splunkd 6405 root 4u IPv4 63003 0t0 TCP *:8089 (LISTEN)
splunkd 6405 root 60u IPv4 63818 0t0 TCP *:9997 (LISTEN)
splunkd 6405 root 128u IPv4 123397 0t0 TCP *:8088 (LISTEN)
splunkd 6405 root 156u IPv4 64895 0t0 TCP *:8000 (LISTEN)
mongod 6482 root 10u IPv4 61364 0t0 TCP *:8191 (LISTEN)
python3.7 6623 root 7u IPv4 63884 0t0 TCP 127.0.0.1:8065 (LISTEN)

Now I try to send an event over with curl:

curl -v -k -H "Authorization: Splunk GENERATED_HEC_TOKEN" http://VM_PUBLIC_IP:9997/services/collector/event -d '{ "event": "testing manually" }'

I get back an error:

* Trying VM_PUBLIC_IP:9997...
* Connected to VM_PUBLIC_IP (VM_PUBLIC_IP) port 9997 (#0)
> POST /services/collector/event HTTP/1.1
> Host: VM_PUBLIC_IP:9997
> User-Agent: curl/7.74.0
> Accept: */*
> Authorization: Splunk GENERATED_HEC_TOKEN
> Content-Length: 31
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 31 out of 31 bytes
* Empty reply from server
* Connection #0 to host VM_PUBLIC_IP left intact
curl: (52) Empty reply from server

I tried some of the other ports:

8088: Connection reset by peer
8089: Connection reset by peer
8000: HTTP/1.1 303 (which I expected in this case)

What am I doing wrong here?
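For what it's worth, a sketch reusing the post's own placeholders: the HEC endpoint is the one listening on port 8088, not the Splunk-to-Splunk receiving port 9997, and by default it speaks HTTPS, which is why plain-HTTP requests to the other ports are rejected. The request would look roughly like:

# HEC defaults to HTTPS on 8088; -k skips verification of the default self-signed certificate
curl -k https://VM_PUBLIC_IP:8088/services/collector/event \
  -H "Authorization: Splunk GENERATED_HEC_TOKEN" \
  -H "X-Splunk-Request-Channel: 11111111-2222-3333-4444-555555555555" \
  -d '{ "event": "testing manually", "index": "splunk_test_events" }'

The X-Splunk-Request-Channel header (any GUID you choose) is only needed because indexer acknowledgement was enabled on the token; without acknowledgement it can be dropped.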
I see the following errors when running a search against data in a vix (virtual index). We recently upgraded to 8.1.3, at which point I assume the third-party jar files changed from commons-compress 1.10 to 1.19. I think there is some configuration still pointing to the old 1.10 jar file. I have looked in indexes.conf for the vix configuration which references the path, changed it to the new version of commons-compress-1.19.jar (see below), and deployed it to the SHC; however, it does not seem to make any difference. Can anyone help?
I need your help to back up the entire set of .conf files in Splunk Enterprise and ES separately, please. Can this backup be scheduled? Is a scheduled backup recommended here? Thanks a million in advance.
Hi there! Please allow me to admit that I'm a newbie to Splunk + Sigma rules for detection. In my test environment, I have imported Windows Sysmon event logs. I understand that using sigmac I can convert Sigma rules into Splunk searches. My question is: how would I use those Sigma rules with Splunk for detection? My understanding is that when I ingest new logs, Splunk would automatically run those rules against the newly ingested logs? Thank you.
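A minimal sketch of how a sigmac-converted rule typically ends up running automatically, assuming the query below is only a placeholder for the real sigmac output, is a scheduled alert, defined either in the UI or in savedsearches.conf:

[Sigma - Suspicious Sysmon Process Creation]
# placeholder query; replace with the SPL produced by sigmac
search = index=sysmon EventCode=1 CommandLine="*-enc*"
enableSched = 1
# run every 5 minutes over the most recent 5 minutes of data
cron_schedule = */5 * * * *
dispatch.earliest_time = -5m
dispatch.latest_time = now
# trigger when the search returns any events
counttype = number of events
relation = greater than
quantity = 0

Splunk does not automatically run converted rules against new data; each rule has to be saved as a scheduled search or alert like the sketch above (or managed by an app such as Enterprise Security as a correlation search).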
I am ingesting a text file and have created a field called Flag. I am looking to create a filter which only shows me events where the first two characters of that field are capital letters. I.e., I want to see events where Flag is VMs, SVictor, or ARev, but not Amy, Fox, or Dana. Can you help?
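A minimal sketch, assuming Flag is already extracted at search time, is to anchor a regular expression to the first two characters:

... | regex Flag="^[A-Z]{2}"

or, equivalently, with an eval-style filter:

... | where match(Flag, "^[A-Z]{2}")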
Hello everyone. I'm getting "Forced bundle replication failed. Reverting to old behavior - using most recent bundles on all" on a search head, and I'm not sure how to fix it. I excluded heavy files from the bundle and also restarted the search head, but nothing changed. Where should I dig? I wasn't able to find this error message in the Splunk documentation or on the internet. The closest topic on Splunk Answers was related to search head clustering, but since I wasn't setting up SH clustering, I guess it's not applicable. Additional info: before the issue occurred, I noticed that disk usage on the indexers went to 100%. I solved it by deleting data from /opt/splunk/var/run/searchpeers (except the latest files). My environment:
- 4 indexer VMs.
- 2 search head VMs (not clustered, just testing Splunk 7 and Splunk 8 in parallel). The 4 indexers are connected as distributed search peers to each of those search heads.
- No deployment server in use.
Sometimes the network connection between the indexers and the search head is poor, so maybe that contributes somehow. Any suggestions and ideas are appreciated.
Hi, I'm trying to map alerts to mitre_technique_id from one of my APIs, and I see strange behaviour from the Splunk CIM pie chart: it says "Your search returned no results". However, I can see the mapped values in Splunk when performing a search query. The data is indexed as expected but is not being populated on the pie chart, which gives the error message shown in the picture below. Please reply or comment if there are any known resolutions. Thank you!
We have an indexer, a search head, and a heavy forwarder on a vessel. The heavy forwarder sends the data to a head office, but because the vessel moves in international waters far from the head office, the head-office indexers become disconnected from the vessel. We know the heavy forwarder buffers the data until the indexers become available again, but the buffer is in memory (RAM), and the buffered data can become very large when the vessel is disconnected for a long time, so memory may fill up and the heavy forwarder may crash. Now my question: can we make the heavy forwarder buffer the data on the hard disk instead of in memory, or is there any other solution for this case?
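A minimal sketch of one option, assuming the data reaches the heavy forwarder over a network input (the port and sizes below are placeholders; persistent queues apply to network, scripted, and HEC inputs, not to file monitor inputs), is a persistent queue in inputs.conf, which spills the input queue to disk once the in-memory queue is full:

[tcp://5514]
# keep only a small in-memory queue
queueSize = 10MB
# spill up to 10 GB of undelivered data to disk while the
# indexers at the head office are unreachable
persistentQueueSize = 10GB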
Dear team, kindly help us create an official support account with case-opening privileges. Best regards.