All Topics

Hi there, I'm seeing a strange problem with version 8.0.8. I have a search that builds a lookup table one time only, and it has to look back 60 days in order to populate correctly. I'm also working on hourly updates for this, but that part is much less of a problem. I do not have access to summary indexing here, so don't bother mentioning it! It so happens that the admins of this site have enabled tsidx reduction, so I'm seeing this warning:

    Search on most recent data has completed. Expect slower search speeds as we search the reduced buckets.

Not in itself a huge problem, as in the search box I eventually see all the available results. HOWEVER... when I pipe this search into outputlookup, it doesn't seem to wait for all the results to render and goes off and writes the lookup table without (very annoyingly) waiting for one particular column to appear. Is this a bug? Has anyone else seen this? Any way around it? It seems to be entirely consistent, and I want to stop hammering the Splunk instance with this very expensive search!

Cheers, Charles

Hello All, I have set up the Splunk Add-on and Splunk App for Unix and Linux. Data is flowing properly, however I am having an issue with alerts. I am trying to set up alerts for various things to Slack. I have the first alert, on memory, working: I set it to 1 min real-time and it seems to work just fine. This is the working query:

    `os_index` source=vmstat | where max(memUsedPct) > 90 | stats max(memUsedPct) by host

However, when I try to do the same for disk, it does not work. I have tried expanding to 5 min and 30 min real-time windows, but the only way I get data to show up in this query is by removing the where clause. I also tried using something like latest() instead of max(), but that didn't help. What am I doing wrong here?

    `os_index` source=df | where max(UsePct) > 10 | stats max(UsePct) by host

Thank you, jackjack

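For context on why the where clause behaves oddly: in a where expression, max() is the eval function (maximum of its arguments), not the stats aggregation, so where max(UsePct) > 10 just compares each individual event's UsePct rather than a per-host maximum. The usual pattern is to aggregate first and then filter the aggregated value. A minimal sketch of that ordering, assuming the `os_index` macro and the UsePct field from the df sourcetype (the threshold is illustrative):

    `os_index` source=df
    | stats max(UsePct) AS max_use_pct BY host
    | where max_use_pct > 10

The same shape would apply to the memory alert: stats max(memUsedPct) by host first, then where on the result.
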
I'm trying to use a link list dropdown to fill in URLs of specific CSV files from the Splunk Lookup Editor app into a dashboard panel. I've followed the instructions to re-enable iframes in Splunk 8.0.x+, and the frame itself forms, but it's stuck saying "loading". After doing some web browser debugging, it appears this is because the Lookup Editor app is calling JavaScript and there is an HTML tag that sets class="no-js", but I'm not a very good HTML debugger and I can't tell if this is being done in the parent app's CSS or in the child app's CSS. I think, however, that if I could get the iframe to support the JavaScript I should be in good shape. It's all the same Splunk instance, but I haven't had any luck yet. Any help is greatly appreciated!

I have the following log:

    !!! --- HUB ctxsdc1cvdi013.za.sbicdirectory.com:443 is unavailable --- !!!
    user='molefe_user' password='molefe' quota='user'
    host='002329bvpc123cw.branches.sbicdirectory.com' port='443' count='1'
    !!! --- HUB 002329bvpc123cw.branches.sbicdirectory.com:443 is unavailable --- !!!
    host='005558bvpc5ce4w.za.sbicdirectory.com' port='443' count='1'
    !!! --- HUB 005558bvpc5ce4w.za.sbicdirectory.com:443 is unavailable --- !!!
    host='41360jnbpbb758w.za.sbicdirectory.com' port='443' count='1'
    !!! --- HUB 41360jnbpbb758w.za.sbicdirectory.com:443 is unavailable --- !!!
    host='48149jnbpbb041w.za.sbicdirectory.com' port='443' count='1'
    !!! --- HUB 48149jnbpbb041w.za.sbicdirectory.com:443 is unavailable --- !!!
    user='pips_lvl_one_user' password='pips_lvl_one' quota='user'

I'm struggling to extract the following (originally highlighted) items:

    ctxsdc1cvdi013.za.sbicdirectory.com = workstation ID
    is unavailable = status
    molefe = quota

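A minimal rex sketch along those lines, assuming the workstation ID is whatever precedes the :port after "HUB" and that the quota value should be taken from the password='...' pair, as in the example mapping above (the field names are illustrative):

    ... | rex "HUB\s+(?<workstation_id>\S+):\d+\s+(?<status>is unavailable)"
        | rex "password='(?<quota>[^']+)'"
        | table workstation_id status quota
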
I am trying to control the ingest rate into Splunk Cloud. I have some firewalls that are very chatty, and the firewalls themselves can only point to a single syslog destination. For security and compliance reasons, I need to retain and store ALL logs for one year. We have an appliance that forwards to our SOC, and it basically has unlimited storage. For reporting and alerting, I need to send most messages into Splunk Cloud.

Logging is controlled by ACL, and in the syslog messages you see ACLs. Based on how my firewall is configured, there are a few ACLs that are chattier than others; for example, the implicit deny ACL is CONSTANTLY chatting. The only time I really need to see this ACL in Splunk logs is when I am troubleshooting, however the SOC wants to see this ACL all the time. The implicit deny rule accounts for about 30% of all syslog data generated. Ideally, when I write to disk on the syslog-ng server, I would like to drop the implicit deny logs so that when the Universal Forwarder reads the log, it won't be sending that unneeded 30% overhead (the implicit deny rule accounts for about 20-50 GB of ingest a day alone).

My initial log path statement looks like the following:

    log { source(s_udp514); filter(f_device); destination(d_socappliance); destination(d_disk); flags(final); };

I then tried two different log path statements to try and separate the traffic so that I can apply the message drop filter:

    filter f_device { ( host("192.168.1.1") or host("fqdn.device.com") ) };
    filter f_device_msgdrop { ( not match("aclID=0" value(MESSAGE)); ) };

    log { source(s_udp514); filter(f_device); destination(d_socappliance); flags(final); };
    log { source(s_udp514); filter(f_device); filter(f_device_msgdrop); destination(d_disk); flags(final); };

aclID=0 is the ACL ID of the implicit deny rule. The concept here is that if the string "aclID=0" exists in the syslog message, I don't want to write it to disk, and therefore the Universal Forwarder never sees it in the log file and it doesn't get sent to the cloud. When I use the method above, I end up disabling logging to disk. I haven't verified whether logging to the SOC appliance stops as well. Any thoughts on how to tackle this?

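For what it's worth, a sketch of one possible arrangement, reusing the source/destination names above. It keeps everything in a single log path so that flags(final) cannot starve a later path (with two separate paths, flags(final) on the first one stops matching messages from ever reaching the second), and it puts the drop filter in an embedded log block that only affects the disk destination. The filter syntax here (value("MESSAGE") quoted, semicolons only at statement ends) is how I would write it, but treat it as an untested sketch:

    filter f_device { host("192.168.1.1") or host("fqdn.device.com"); };
    filter f_not_implicit_deny { not match("aclID=0" value("MESSAGE")); };

    log {
        source(s_udp514);
        filter(f_device);
        destination(d_socappliance);         # SOC keeps everything
        log {
            filter(f_not_implicit_deny);     # drop aclID=0 for the on-disk copy only
            destination(d_disk);
        };
        flags(final);
    };
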
We have developed a Splunk client app using the Splunk Java SDK. A particular Splunk server installation has a few indexes with data stored in them. When getIndexes() is called on the com.splunk.Service object, it returns an empty IndexCollection. However, on the same Service object, we can search data in the indexes, and it is certain that those indexes do exist.

    IndexCollection indexCollection = service.getIndexes();
    boolean indexNotFound = indexCollection.isEmpty();

I got the same results with Java SDK 1.5.0.0 and 1.6.5.0. This is happening only in one particular Splunk server installation; for other Splunk server installations, we do not have any issue. Under what condition could this issue happen? What can I do to troubleshoot?

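One thing worth ruling out is the namespace and role the SDK session is using, since index visibility can differ per user and per app/owner context. A small sketch that connects with an explicit namespace and lists whatever indexes that session can see (host, credentials, and the app/owner values are illustrative):

    import com.splunk.IndexCollection;
    import com.splunk.Service;
    import com.splunk.ServiceArgs;

    public class ListVisibleIndexes {
        public static void main(String[] args) {
            // Illustrative connection settings; replace with the problematic installation's values.
            ServiceArgs loginArgs = new ServiceArgs();
            loginArgs.setHost("splunk.example.com");
            loginArgs.setPort(8089);
            loginArgs.setUsername("svc_account");
            loginArgs.setPassword("changeme");
            loginArgs.setApp("search");   // try an explicit app/owner namespace
            loginArgs.setOwner("nobody");

            Service service = Service.connect(loginArgs);

            // List what this session is actually allowed to see.
            IndexCollection indexes = service.getIndexes();
            System.out.println("Visible indexes: " + indexes.size());
            for (String name : indexes.keySet()) {
                System.out.println("  " + name);
            }
        }
    }

If that count is also zero while the same credentials can see the indexes in Splunk Web, the role and its index-related permissions on that particular installation would be the next thing I'd compare against the installations that work.
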
I am building a search based on a table of products with different versions. I need to run an initial search that returns the version with the most hosts ("Mainstream") and use that version to compare everything else against, in order to determine whether it is less than / greater than (older / newer). I am currently using a foreach command to send each category of product to a subsearch, which grabs the mainstream version and returns it so I can compare each event's version to it. I am having extreme difficulty passing the field to the subsearch and filtering on the category with something like a where command, without setting off confusing errors that don't really make any sense ("Eval command malformed"). The logic of the query works when I am not using a '<<FIELD>>' token, but as soon as I try to pass a token with a where command inside the subsearch, it falls apart. I am a Splunk newbie, so maybe I am missing something obvious; please advise:

    | inputlookup Lookup_Table.csv
    | eval Category = OSType."-".ProductName
    | stats values(ProductVersion) AS Version values(LifeCycleStatus) AS Status by Category
    | foreach Category
        [eval newLifecycleStatus=case(Version<
            [| inputlookup Lookup_Table.csv
             | eval Category = OSType."-".ProductName
             | where Category =='<<FIELD>>'
             | sort -product_count
             | head 1
             | eval Version="\"".ProductVersion."\""
             | return $Version], "Declining")]

I changed this code to something like the following, with no luck, because I can't filter the results without a where statement:

    | inputlookup Lookup_Table.csv
    | stats values(ProductVersion) AS Version values(LifeCycleStatus) AS Status by ProductCode
    | foreach ProductCode
        [eval newLifecycleStatus=case(Version==
            [| inputlookup Lookup_Table.csv
             | eval NewProductCode=tostring('<<FIELD>>')
             | sort -product_count
             | head 1
             | eval ProductVersion="\"".ProductVersion."\""
             | return $ProductVersion], "Emerging")]

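As a possible alternative to foreach plus a subsearch per category, here is a sketch that derives the mainstream version once with eventstats and compares against it in the same pipeline. It assumes the lookup has a product_count column, as in the original, and note that comparing version strings with < and > is a plain string comparison, which can misorder versions such as 9 vs 10:

    | inputlookup Lookup_Table.csv
    | eval Category = OSType."-".ProductName
    | eventstats max(product_count) AS max_count BY Category
    | eval MainstreamVersion = if(product_count == max_count, ProductVersion, null())
    | eventstats values(MainstreamVersion) AS MainstreamVersion BY Category
    | eval newLifecycleStatus = case(ProductVersion == MainstreamVersion, "Mainstream",
                                     ProductVersion <  MainstreamVersion, "Declining",
                                     ProductVersion >  MainstreamVersion, "Emerging")
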
Hi,

We have these log entries:

    2021-09-14 13:20:08.325 DEBUG [,88538eaa548c8b64,88538eaa548c8b64,true] 1 --- [tp1989219205-24] m.m.a.RequestResponseBodyMethodProcessor : Writing ["ping"]
    2021-09-14 13:20:08.325 DEBUG [,88538eaa548c8b64,88538eaa548c8b64,true] 1 --- [tp1989219205-24] m.m.a.RequestResponseBodyMethodProcessor : Using 'text/plain', given [*/*] and supported [text/plain, */*, text/plain, */*, application/json, application/*+json, application/json, application/*+json]

I want to mask the MethodProcessor string so that it is not visible in the logs. Can someone provide the regex I can use?

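If the goal is to mask it at index time, one minimal sketch is a SEDCMD on the relevant sourcetype in props.conf (the sourcetype name and the replacement text here are illustrative; widen the pattern if more than the class name should be hidden):

    [my_spring_app]
    SEDCMD-mask_processor = s/m\.m\.a\.RequestResponseBodyMethodProcessor/m.m.a.#MASKED#/g

At search time, a rough equivalent is | rex mode=sed field=_raw "s/RequestResponseBodyMethodProcessor/#MASKED#/g", but that only changes what is displayed in the results, not what is stored in the index.
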
Hello, I currently have a search over index_A that runs a subsearch from index_B, looking to match a field (field_B) from index_B against any log within index_A. The search works great, but the one frustration is not knowing what value field_B held, as all of the tabled results come from index_A. Is there a way I can join that matched field_B to the results at the end of the search? Here is my current search, and thanks to anyone who has the time to help me with this!

    index=index_A [search index=index_B | fields field_B | rename field_B as query]
    | table field_A field_A1 field_A2 field_A3

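One thing that may help here: renaming the subsearch field to query makes Splunk match the values against the raw event text, so the matched value never comes back as a field. If field_B's values actually correspond to a specific field in index_A, renaming to that field name both filters the events and leaves the field available to table (field_A3 below is only an illustrative stand-in for whichever field that is):

    index=index_A
        [ search index=index_B | fields field_B | rename field_B AS field_A3 ]
    | table field_A field_A1 field_A2 field_A3
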
Hello, we are attempting to ingest CSV files from two different applications where the file name and structure are identical. The files are placed into two different directories on a heavy forwarder, and they contain different sets of data. The challenge we are having is that the file sent to the prod directory is ingested but the file in the dev directory is not, or neither file gets ingested. Since the file names are identical and delivered to each directory at the same time, I'm thinking this is causing issues with one or both files not being ingested. Below is how we configured our props config, and it does seem to work, but not consistently. Any help would be appreciated!

    #Production
    [batch://C:\Import\Prod\*.csv]
    index = test
    sourcetype = test
    move_policy = sinkhole

    #Development
    [batch://C:\Import\Dev\*.csv]
    index = testb
    sourcetype = testb
    move_policy = sinkhole

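Two files whose names and leading bytes are identical can collide on the content CRC Splunk uses to decide whether it has already seen a file; salting the CRC with the full path is the usual way to make the two directories distinct. A minimal sketch (note these batch stanzas normally belong in inputs.conf rather than props.conf; the index/sourcetype values are kept from the original):

    [batch://C:\Import\Prod\*.csv]
    index = test
    sourcetype = test
    move_policy = sinkhole
    crcSalt = <SOURCE>

    [batch://C:\Import\Dev\*.csv]
    index = testb
    sourcetype = testb
    move_policy = sinkhole
    crcSalt = <SOURCE>
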
My Splunk version is 7.3, but Splunk left Russia and is not supported here. If I upgrade to version 8, it is likely that the company's Splunk will be blocked. We have an official license. Is it possible to update Splunk so that my Splunk does not connect back to the central office and is not blocked?

Auditors are asking for an updated AOC for Splunk. Where can we find this document from Splunk?

I want to do a deep dive into my data sources and data integrity. I need to learn which SPL searches and apps should be used for this purpose. I appreciate your help.

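As a starting point for the data-source side, a minimal sketch that inventories what is arriving per index and sourcetype over whatever time range is selected (the index=* filter is illustrative and can be narrowed):

    | tstats count max(_time) AS last_seen where index=* by index sourcetype
    | eval last_seen = strftime(last_seen, "%F %T")
    | sort - count
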
I was getting an SSL error due to a self-signed certificate on port 8089. This certificate has been replaced with a DigiCert-signed certificate. With the updated certificate I am able to connect to the Splunk API on 8089 from my local desktop. However, I am still getting an SSL error when connecting from our application server. I validated that ports 8089 and 443 are open from the app server. I can get to Splunk on port 443 from the app server, but when trying to connect on port 8089 I get an SSL error. Please help me understand what could be causing this and how to resolve the error.

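One way to compare what the app server is actually being handed on each port is openssl s_client, run from the app server itself (the hostname is illustrative); a different certificate or a missing intermediate on 8089 versus 443 would show up here:

    openssl s_client -connect splunk.example.com:8089 -showcerts </dev/null | openssl x509 -noout -subject -issuer -dates
    openssl s_client -connect splunk.example.com:443  -showcerts </dev/null | openssl x509 -noout -subject -issuer -dates
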
Guys, how are you doing? I have a challenge: I need to take data coming from some searches on a dashboard in Splunk, put it into a table (I think via DB Connect), and push that data to an AWS Redshift cluster. Do you know how I can do that? I am trying to find something on the internet, but any path, solution, idea, or documentation would be useful! Thanks in advance.

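One shape this could take, assuming DB Connect is installed with a JDBC connection to Redshift and an output configured against it: a scheduled search that writes its results through that output. The search, the output name, and even the availability of the dbxoutput command depend on your DB Connect version, so treat this purely as a sketch:

    index=web sourcetype=access_combined
    | stats count AS hits BY status
    | dbxoutput output="redshift_status_counts"
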
I've recently installed an add-on in my dev instance which created various fields, including user and NormalizedUser. I have a one-time CSV file with a list of users that I need imported into one or both of those existing fields. Is this possible? It doesn't seem to work, and I would prefer not to have to search against multiple fields (I'd like to run a query against the add-on's index for user or NormalizedUser and retrieve the entire list of users or NormalizedUsers). Currently it seems to put the CSV data into some other field name, and I don't even know where it's grabbing that field name from. The field header on the CSV column is NormalizedUser. Any help appreciated.

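For what it's worth, uploading the CSV doesn't attach its values to fields on events that are already indexed; one sketch is to keep the CSV as a lookup file and combine it with the add-on's data at search time (the index name and lookup file name here are illustrative; NormalizedUser is from the original):

    index=addon_index
    | eval all_users = coalesce(NormalizedUser, user)
    | append [| inputlookup user_list.csv | rename NormalizedUser AS all_users]
    | stats values(all_users) AS all_users
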
Hi Team, Splunk is unable to read a file that has the particular content below. If the file contains other content, then Splunk is able to pick it up, so I'm not sure what is wrong with this content. If I reorder the lines, it is also able to recognize the file. The file content is:

    ACCNT|AB10012345|1234567890ABC4567890123456789012|INR|C|01-07-2021 00:00:00|30-07-2021 00:00:00|TOD Ref. Type [IC] not set for scheme [MMSAA]||
    ACCNT|AB10012345|1234567890ABC4567890123456789012|INR|C|01-07-2021 00:00:00|30-07-2021 00:00:00|There is no transaction for the combination [02-08-2021 00:00:00] and [   M12345]. The posting of transaction failed. Transaction ID: [  M12345]||
    ACCNT|AB10012345|1234567890ABC4567890123456789012|INR|C|01-07-2021 00:00:00|30-07-2021 00:00:00|The posting failed.||

The error for this file in the Splunk logs is:

    ERROR TailReader - File will not be read, seekptr checksum did not match (file=<FullPath_of_file_with file name>). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at <website of splunk> for more info.

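The error message itself hints at the likely cause: these rows share a long identical prefix, so the initial CRC Splunk computes over the start of the file can collide with a file it has already seen. A minimal inputs.conf sketch along the lines the message suggests (the monitor path is illustrative; either setting alone may be enough):

    [monitor:///path/to/feed/*.dat]
    initCrcLength = 1024
    crcSalt = <SOURCE>
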
Hi, I am trying to export a PDF in the Splunk Security Essentials app --> Analytics Advisor --> MITRE ATT&CK Framework --> Export PDF, but the data in the PDF is not in a proper format. Please help here.

    PDF screenshot
    Dashboard screenshot

I am trying to tell where to look for timestamps and to make sure the time is current and synchronized across my Splunk and ES environment. I appreciate your time and response in advance.

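One quick check is to compare each event's timestamp with the time it was actually indexed; consistently large or negative lag for a host usually points at clock skew or timestamp-parsing problems (the index filter and time range are illustrative):

    index=* earliest=-15m
    | eval lag_seconds = _indextime - _time
    | stats avg(lag_seconds) AS avg_lag max(lag_seconds) AS max_lag min(lag_seconds) AS min_lag BY host
    | sort - max_lag
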
Hello, I have seen multiple posts related to large lookup files delaying replication in a distributed environment. In my case I have a lookup table of around 120 MB that is used in an automatic lookup, so it has to be replicated to the search peers. The lookup table file is static and rarely changes. My questions are:

- Once the replication bundle syncs successfully, will the Splunk search head try to replicate it again to the peers if no change has been found?
- If the file changes by only a few lines/records, will Splunk try to replicate just the delta from the previous state?

Bandwidth is limited, so I don't want to have a bottleneck during operations. Thank you in advance for your time. With kind regards, Chris

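If it turns out the file is being pushed more often than you would like, one option to consider is excluding it from the knowledge bundle entirely and distributing the static file to the peers by other means. A sketch of what that looks like in distsearch.conf on the search head (the stanza name varies by version, replicationBlacklist in older releases and replicationDenylist in newer ones, and the path regex here is illustrative):

    [replicationBlacklist]
    big_static_lookup = apps[/\\]my_app[/\\]lookups[/\\]big_lookup\.csv

Keep in mind that an automatic lookup excluded from the bundle will fail on the peers unless the file is placed there through some other mechanism.
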