I am trying to control the ingest rate into Splunk Cloud. I have some firewalls that are very chatty, and the firewalls themselves can only point to a single syslog destination. For security and compliance reasons, I need to retain and store ALL logs for one year. We have an appliance that forwards to our SOC and it has essentially unlimited storage. For reporting and alerting, I need to send most messages into Splunk Cloud. Logging is controlled by ACL, and the ACLs appear in the syslog messages. Based on how my firewall is configured, a few ACLs are chattier than others; for example, the implicit deny ACL is CONSTANTLY chatting. The only time I really need to see this ACL in Splunk is when I am troubleshooting, but the SOC wants to see it all the time. The implicit deny rule accounts for about 30% of all syslog data generated. Ideally, when I write to disk on the syslog-ng server, I would like to drop the implicit deny logs so that when the Universal Forwarder reads the log it won't be sending that unneeded 30% overhead (the implicit deny rule accounts for about 20-50 GB of ingest a day on its own).

My initial log_path statement looks like the following:

log { source(s_udp514); filter(f_device); destination(d_socappliance); destination(d_disk); flags(final); };

I then tried two different log path statements to try to separate the traffic so that I can apply the message drop filter:

filter f_device { ( host("192.168.1.1") or host("fqdn.device.com") ) };
filter f_device_msgdrop { ( not match("aclID=0" value(MESSAGE)); ) };
log { source(s_udp514); filter(f_device); destination(d_socappliance); flags(final); };
log { source(s_udp514); filter(f_device); filter(f_device_msgdrop); destination(d_disk); flags(final); };

aclID=0 is the ACL ID of the implicit deny rule. The concept is that if the string "aclID=0" exists in the syslog message, I don't want to write it to disk, so the Universal Forwarder never sees it in the log file and it doesn't get sent to the cloud. When I use the method above, I end up disabling logging to disk. I haven't verified whether logging to the SOC appliance stops as well. Any thoughts on how to tackle this?
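A minimal sketch of one way this is often handled in syslog-ng, reusing the source and destination names from the post (s_udp514, d_socappliance, d_disk) and assuming the goal is to drop only the disk copy. Two things in the attempted config tend to produce exactly the symptom described: flags(final) on the first log path marks matching messages as finished, so the second path (the one feeding d_disk) never sees them, and the drop filter has a stray semicolon inside its expression. The filter name f_not_implicit_deny below is made up for illustration.

filter f_device { host("192.168.1.1") or host("fqdn.device.com"); };
filter f_not_implicit_deny { not message("aclID=0"); };

# Path 1: everything from the device goes to the SOC appliance.
# No flags(final) here, so the same messages still reach path 2.
log { source(s_udp514); filter(f_device); destination(d_socappliance); };

# Path 2: only non-implicit-deny messages are written to disk for the UF to read.
log { source(s_udp514); filter(f_device); filter(f_not_implicit_deny); destination(d_disk); flags(final); };

If the match() form is preferred over message(), the equivalent expression is not match("aclID=0" value("MESSAGE")); note the quoted macro name.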
We have developed a Splunk client app using the Splunk Java SDK. A particular Splunk server installation has a few indexes with data stored in them. When getIndexes() is called on the com.splunk.Service object, it returns an empty IndexCollection. However, using the same Service object, we can search data in those indexes, so it is certain that they exist.

IndexCollection indexCollection = service.getIndexes();
boolean indexNotFound = indexCollection.isEmpty();

I got the same results with Java SDK 1.5.0.0 and 1.6.5.0. This happens only on this particular Splunk server installation; other Splunk server installations do not have the issue. Under what conditions could this happen? What can I do to troubleshoot?
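A minimal troubleshooting sketch, assuming the difference between installations is visibility rather than the SDK: the account or namespace used on that server may not be allowed to list indexes even though it can search them. Connecting with an explicit owner/app namespace and asking for an unlimited count helps rule those out; the host and credentials below are placeholders.

import com.splunk.*;

public class IndexVisibilityCheck {
    public static void main(String[] args) {
        ServiceArgs loginArgs = new ServiceArgs();
        loginArgs.setHost("splunk.example.com");   // hypothetical host
        loginArgs.setPort(8089);
        loginArgs.setUsername("svc_account");      // hypothetical credentials
        loginArgs.setPassword("changeme");
        loginArgs.setOwner("nobody");              // explicit namespace (assumption)
        loginArgs.setApp("search");
        Service service = Service.connect(loginArgs);

        // Request all entries rather than the server-side default page size.
        CollectionArgs collectionArgs = new CollectionArgs();
        collectionArgs.setCount(0);                // 0 = no limit
        IndexCollection indexes = service.getIndexes(collectionArgs);

        System.out.println("Visible indexes: " + indexes.size());
        for (Index index : indexes.values()) {
            System.out.println(index.getName() + " totalEventCount=" + index.getTotalEventCount());
        }
    }
}

Comparing the output of this check between the problem server and a healthy one (and between an admin account and the app's account) usually narrows the cause to role capabilities or app-level visibility on that one deployment.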
I am building a search that will be based on a table of products with different versions. I need to run an initial search that returns the version with the most hosts ("Mainstream") and use that version to compare everything else against, in order to determine whether each version is less than/greater than it (older/newer). I am currently using a foreach command to send each category of product to a subsearch, which then grabs the mainstream version and returns it so I can compare each event's version to it. I am having extreme difficulty passing the field to the subsearch and filtering the category with something like a where command without setting off confusing errors that don't really make any sense ("Eval command malformed"). The logic of the query works when I am not using a '<<FIELD>>' token, but as soon as I try to pass a token with a where command inside the subsearch, it falls apart. I am a Splunk newbie, so maybe I am missing something obvious; please advise:

| inputlookup Lookup_Table.csv
| eval Category = OSType. "-" .ProductName
| stats values(ProductVersion) AS Version values(LifeCycleStatus) AS Status by Category
| foreach Category
    [eval newLifecycleStatus=case(Version< [| inputlookup Lookup_Table.csv.csv | eval Category = OSType. "-" .ProductName | where Category =='<<FIELD>>' | sort -product_count | head 1 | eval Version="\"".ProductVersion."\"" | return $Version], "Declining")]

I changed this code to something like the following, with no luck, because I can't filter the results without a where statement:

| inputlookup Lookup_Table.csv
| stats values(ProductVersion) AS Version values(LifeCycleStatus) AS Status by ProductCode
| foreach ProductCode
    [eval newLifecycleStatus=case(Version==[| inputlookup Lookup_Table.csv | eval NewProductCode=tostring('<<FIELD>>') | sort -product_count | head 1 | eval ProductVersion="\"".ProductVersion."\"" | return $ProductVersion], "Emerging")]
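As a possible alternative to the foreach/subsearch approach described above, here is a minimal sketch that computes the per-category mainstream version in a single pass with eventstats instead of tokens. It assumes the lookup has a product_count column (as the post implies) and that version strings compare sensibly with < and >, which is not true for all version formats; the field names are taken from the post.

| inputlookup Lookup_Table.csv
| eval Category = OSType."-".ProductName
| eventstats max(product_count) AS max_count BY Category
| eventstats max(eval(if(product_count==max_count, ProductVersion, null()))) AS MainstreamVersion BY Category
| eval newLifecycleStatus=case(
    ProductVersion==MainstreamVersion, "Mainstream",
    ProductVersion<MainstreamVersion, "Declining",
    ProductVersion>MainstreamVersion, "Emerging")

Because everything stays in one pipeline, there is no <<FIELD>> token to pass, which is what was tripping the "Eval command malformed" errors.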
Hi,

We have log entries like the following:

2021-09-14 13:20:08.325 DEBUG [,88538eaa548c8b64,88538eaa548c8b64,true] 1 --- [tp1989219205-24] m.m.a.RequestResponseBodyMethodProcessor : Writing ["ping"]
2021-09-14 13:20:08.325 DEBUG [,88538eaa548c8b64,88538eaa548c8b64,true] 1 --- [tp1989219205-24] m.m.a.RequestResponseBodyMethodProcessor : Using 'text/plain', given [*/*] and supported [text/plain, */*, text/plain, */*, application/json, application/*+json, application/json, application/*+json]

I want to mask the MethodProcessor string so it is not visible in the logs. Can someone provide the regex I can use?
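A minimal sketch of the index-time masking approach commonly used for this, assuming the events come in under a known sourcetype (the name my_app_logs is a placeholder) and that masking should happen via props.conf on the indexer or heavy forwarder before the data is written. SEDCMD only affects data ingested after the change; events already indexed are untouched.

# props.conf
[my_app_logs]
SEDCMD-mask_method_processor = s/\S*MethodProcessor/########/g

The pattern \S*MethodProcessor swallows the whole logger name (m.m.a.RequestResponseBodyMethodProcessor) and replaces it with ########; adjust the replacement or anchor the regex more tightly if only part of the string should be hidden.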
Hello, I currently have a search over index_A that runs a subsearch over index_B, looking to match a field (field_B) from index_B against any log within index_A. The search works great, but the one frustration is not knowing what value field_B held, since all of the tabled results come from index_A. Is there a way I can join the matched field_B value to the results at the end of the search? Here is my current search, and thanks to anyone who has the time to help me with this!

index=index_A [search index=index_B | fields field_B | rename field_B as query]
| table field_A field_A1 field_A2 field_A3
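A minimal sketch of one common way to keep the matched value visible, assuming there is an extracted field in index_A (called field_A_match here purely as a placeholder) that actually holds the value field_B matches against. Renaming to that field instead of to query makes the subsearch filter on it explicitly, and the field is then available to table:

index=index_A [search index=index_B | fields field_B | rename field_B as field_A_match]
| table field_A field_A1 field_A2 field_A3 field_A_match

If the value only appears somewhere in the raw text of index_A events (so renaming to query really is required), a rex after the base search can pull it back out into its own field before the table command.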
Hello, We are attempting to ingest CSV files from two different applications where the file name and structure are identical. The files are placed into two different directories on a heavy forwarder and they contain different sets of data. The challenge we are having is that either the file sent to the prod directory is ingested while the file in the dev directory is not, or neither file gets ingested. Since the file names are identical and delivered to each directory at the same time, I'm thinking this is causing issues with one or both files not being ingested. Below is how we configured our batch inputs; it does seem to work, but not consistently. Any help would be appreciated!

#Production
[batch://C:\Import\Prod\*.csv]
index = test
sourcetype = test
move_policy = sinkhole

#Development
[batch://C:\Import\Dev\*.csv]
index = testb
sourcetype = testb
move_policy = sinkhole
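A hedged sketch of one adjustment often tried for this symptom, assuming the skipped file is losing out to Splunk's initial-CRC check because both CSVs share the same name and the same leading bytes (headers). Adding crcSalt = &lt;SOURCE&gt; folds the full path into the checksum so the Prod and Dev copies are tracked as distinct files; the stanzas otherwise mirror the ones in the post.

#Production
[batch://C:\Import\Prod\*.csv]
index = test
sourcetype = test
move_policy = sinkhole
crcSalt = <SOURCE>

#Development
[batch://C:\Import\Dev\*.csv]
index = testb
sourcetype = testb
move_policy = sinkhole
crcSalt = <SOURCE>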
My Splunk version is 7.3, but Splunk left Russia and is not supported here. If I upgrade to version 8, it is likely that the company's Splunk will be blocked. We have an official license. Is it possible to update Splunk so that it does not call back to the central office and is not blocked?
Auditors are looking for an updated AOC (Attestation of Compliance) for Splunk. Where can we find this document from Splunk?
Changes for Internet Explorer and Firefox browsers, and deprecating support for Chrome versions 64 and 83

On October 18, 2021, we will implement synthetics monitoring behavioral changes for Internet Explorer (IE) 11 and Firefox, and deprecate support for Chrome versions 64 and 83 browsers. This change is part of an effort to bring our synthetics monitoring to a single browser architecture, allowing us to innovate faster and provide more value-added capabilities to better support our customer needs. This guide will help you make the needed changes to avoid any disruption in your services.

Contents
What is changing and when?
Why deprecate IE 11, Firefox, and Chrome versions 64 and 83?
How do I execute jobs using Headless Chrome?
If I am using Chrome 64 or 83, what actions do I need to take?
How can I tell what browsers I am currently using?
How do I manually move jobs to Chrome 86?
Are there any additional resources?

What is changing and when?
On September 13, 2021, we announced the deprecation of certain browser versions previously supported with our synthetic monitoring capabilities, effective October 18, 2021. These versions include Internet Explorer (IE) 11, Firefox, and Chrome versions 64 and 83. To ensure you have adequate time to make the necessary changes, we will keep a migration window open until November 18, 2021, to assist with any support issues.

Why deprecate IE 11, Firefox, and Chrome versions 64 and 83?
Internet Explorer 11: Microsoft has announced that they will be ending support for Internet Explorer starting June 15, 2022. To comply with our information security policy, AppDynamics will not support the deprecated 3rd-party software.
IE 11, Firefox, and Chrome 64 and 83: Chrome, Firefox, and Edge browsers all use the Chrome engine, so availability and performance data will be similar to the Headless Chrome browser. For Chrome specifically, this is part of our continued development and support for the more modern version of the browser.

How will using Headless Chrome impact jobs executed with IE and Firefox?
On October 18, jobs currently being executed in IE or Firefox will automatically start being executed in headless Chrome 86, with the IE 11 and Firefox user agent. While customers will not lose the ability to monitor availability and performance in IE 11 and Firefox, this change could impact any custom scripts you may be running (see below for details).
NOTE: Based on our analysis, the availability and performance of jobs executed in headless Chrome with the IE 11 and Firefox user agent is similar to the availability and performance of jobs executed in native IE 11 and Firefox.

If I am using Chrome 64 or 83, what actions do I need to take?
You will need to move to Chrome 86 before the deprecation, either manually (see the details below) or by reaching out to us through a Support ticket.
Move the jobs to Chrome 86: Customers are encouraged to move jobs from IE 11 and Firefox, and required to move Chrome versions 64 and 83, to Chrome 86 before we make the described changes. This can be done manually (instructions below) or by submitting a support ticket.
Update the scripts: Many customers use scripts to leverage synthetics monitoring, for example, Selenium-based Python scripts to create synthetic monitoring jobs. Some of these scripts may break or fail with timeout errors when the jobs are migrated to Chrome. This would most likely be due to an incompatibility between the Selenium commands and the Chrome web driver used in Chrome 86. For example, the Selenium commands supported in Chrome 64 may not be supported in Chrome 86. Fixing these issues may require you to update existing scripts.
Update timeout and alerting configurations: Customers may also have to address changes in performance between the existing browser and Chrome 86. This could require some fine-tuning of your current timeout and alerting configurations.
On-premises customers only: Update the Synth Server to version 21.4.2 or newer, and update to the latest version of Linux PSA, 21.9.
For details about other changes in Chrome 86, refer to our Chrome 86 documentation.

How can I tell what browsers I'm currently using?
To determine which browsers you are currently using:
In the User Experience section, under Synthetics, click Sessions.
From there, use Fields to filter using the Browser and Browser Version options.

How do I manually move jobs to Chrome 86?
Repeat this process for each of your jobs:
Start by going to the specific application which has synthetics monitoring enabled, and go to Jobs.
Then select the specific job and click Edit for options to update the browser type.
Once the pop-up window for the new job appears, click on the Browser button.
From there, scroll down to the Select Browser configuration option, check Chrome, and select version 86 from the dropdown.
Click the Save button in the bottom right corner.

Are there any additional resources?
Browser Synthetic Monitoring Documentation
Browser Synthetic Monitoring: Chrome 86 Support Documentation
Alerts for Browser Synthetic Monitoring
Browser Synthetic Scripts Documentation
AppDynamics Support
As a deep dive into my data sources / data integrity, I need to learn which SPL searches / apps should be used for this purpose. I appreciate your help.
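A minimal sketch of the kind of search often used as a starting point for a data-source review, assuming the goal is simply to see which indexes and sourcetypes exist, how much they hold, and how fresh they are; no app is required for this.

| tstats count latest(_time) AS latest_event WHERE index=* BY index sourcetype
| eval latest_event=strftime(latest_event, "%F %T")
| sort - count

From there, apps such as the Splunk Monitoring Console (for ingestion health) are commonly layered on top, but plain tstats/metadata searches like the one above already answer a lot of the data-integrity questions.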
I was getting an SSL error due to a self-signed certificate on port 8089. That certificate has been replaced with a DigiCert-signed certificate. With the updated certificate I am able to connect to the Splunk API on 8089 from my local desktop. However, I am still getting an SSL error when connecting from our application server. I validated that ports 8089 and 443 are open from the app server. I can reach Splunk on port 443 from the app server, but when trying to connect on port 8089 I get an SSL error. Please help me understand what could be causing this and how to resolve the error.
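A minimal sketch of a check that usually narrows this down, assuming openssl is available on the application server: compare what certificate chain each port actually presents as seen from that host (the hostname below is a placeholder). A mismatch between what the desktop sees and what the app server sees often points to an intercepting proxy, a different trust store on the app server, or an incomplete intermediate chain on 8089.

# Run from the application server; repeat with :443 and compare.
openssl s_client -connect splunk.example.com:8089 -showcerts </dev/null | openssl x509 -noout -subject -issuer -dates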
Guys, how are you doing? I have a challenge: take data coming from some searches on a dashboard in Splunk, put it into a table (I think via DB Connect), and share this data with an AWS Redshift cluster. Do you know how I can do that? I have been trying to find something on the internet; any path, solution, idea, or documentation would be useful! Thanks in advance.
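A hedged sketch of the pattern usually described for this, assuming Splunk DB Connect is installed with a Redshift JDBC connection already set up and a DB Connect output (named redshift_dashboard_output here purely as a placeholder) mapped to the target table. The dashboard search is then extended, or scheduled separately, to push its rows out:

index=web sourcetype=access_combined
| stats count AS hits BY status
| dbxoutput output="redshift_dashboard_output"

Scheduling that search gives a recurring export; the search itself, the connection name, and the output name all need to match whatever is actually defined in DB Connect.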
I've recently installed an add-on in my dev instance which created various fields, including user and NormalizedUser. I have a one-time CSV file with a list of users that I need imported into one or both of those existing fields. Is this possible? It doesn't seem to work, and I prefer not to have to search against multiple fields (I'd like to run a query against the add-on's index for user or NormalizedUser and retrieve the entire list of users or NormalizedUsers). Currently it seems to put the CSV data into some other field name, and I don't even know where it's grabbing that field name from. The field header on the CSV column is NormalizedUser. Any help is appreciated.
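A minimal sketch of one way the CSV is often used without re-mapping fields at all, assuming it was uploaded as a lookup file (the name user_list.csv and the index name are placeholders): because the CSV column header is NormalizedUser, the lookup can be renamed on the fly to whichever field the add-on actually extracts and fed into the search as a filter.

index=addon_index
    [| inputlookup user_list.csv
     | rename NormalizedUser AS user
     | fields user]
| stats count BY user

Dropping the rename line makes the subsearch filter on NormalizedUser instead; the column header in the CSV only has to match the field name being searched, it does not create a new field in the index.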
Hi Team,

Splunk is unable to read a file which has the particular content below. If the file contains other content, then Splunk is able to pick it up, so I am not sure what is wrong with this content. If we reorder the lines, it is also able to recognize the file. The file content is:

ACCNT|AB10012345|1234567890ABC4567890123456789012|INR|C|01-07-2021 00:00:00|30-07-2021 00:00:00|TOD Ref. Type [IC] not set for scheme [MMSAA]||
ACCNT|AB10012345|1234567890ABC4567890123456789012|INR|C|01-07-2021 00:00:00|30-07-2021 00:00:00|There is no transaction for the combination [02-08-2021 00:00:00] and [   M12345]. The posting of transaction failed. Transaction ID: [  M12345]||
ACCNT|AB10012345|1234567890ABC4567890123456789012|INR|C|01-07-2021 00:00:00|30-07-2021 00:00:00|The posting failed.||

The error for this file in the Splunk logs is:

ERROR TailReader - File will not be read, seekptr checksum did not match (file=<FullPath_of_file_with file name>). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at <website of splunk> for more info.
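A hedged sketch of the adjustment the error message itself points at, assuming the issue is that this file's first 256 bytes match a file the input has already seen (the repeated ACCNT|AB10012345|... prefix makes that plausible). Raising the CRC length and/or salting with the source path makes the checksum unique per file; the monitor path and sourcetype below are placeholders.

# inputs.conf
[monitor:///path/to/feed/*.txt]
sourcetype = accnt_feed
initCrcLength = 1024
crcSalt = <SOURCE>

Only files read after the change are affected; files Splunk has already decided to skip may need to be touched or re-dropped into the directory.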
Hi,

I am trying to export a PDF in the Splunk Security Essentials app --> Analytics Advisor --> MITRE ATT&CK Framework --> Export PDF, but the data in the PDF is not in a proper format. Please help here.

PDF Screenshot
Dashboard Screenshot
I am trying to tell where to look for timestamps and make sure time is current and synchronized across my Splunk and ES environment. I appreciate your time and response in advance.
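A minimal sketch of a search often used as a quick clock-skew check, assuming the forwarders write to the _internal index as usual: comparing each event's extracted time with the time it was indexed highlights hosts whose clocks (or timestamp parsing) are off.

index=_internal sourcetype=splunkd
| eval lag_seconds=_indextime-_time
| stats avg(lag_seconds) AS avg_lag max(lag_seconds) AS max_lag BY host
| sort - max_lag

Large or negative lags for a particular host usually suggest NTP drift or a timezone/timestamp-extraction problem on that host rather than a Splunk-wide issue.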
Hello,

I have seen multiple posts related to large lookup files delaying replication in a distributed environment. In my case I have a lookup table of around 120MB that is used by an automatic lookup, so it has to be replicated to the search peers. The lookup file is static and rarely changes. My questions are:

- Once the replication bundle syncs successfully, will the Splunk search head try to replicate it again to the peers if no change has been found?
- If the file changes by only a few lines/records, will Splunk try to replicate just the delta from the previous state?

Bandwidth is limited, so I don't want to have a bottleneck during operations. Thank you in advance for your time.

With kind regards,
Chris
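A hedged sketch of a mitigation sometimes used when a big, static lookup should not ride along in the knowledge bundle at all: exclude it from bundle replication in distsearch.conf on the search head and distribute the file to the peers by some other mechanism (deployment server, configuration management, or a manual copy). The stanza entry name, app path, and pattern below are illustrative only, and the exact matching syntax should be checked against the distsearch.conf documentation for the version in use.

# distsearch.conf on the search head
[replicationBlacklist]
huge_static_lookup = apps[/\\]my_app[/\\]lookups[/\\]big_lookup\.csv

Since the automatic lookup still needs the file present on the peers for searches to work, this only helps if there is a reliable way to keep the copies in sync outside of bundle replication.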
I set up a sample VM for myself to test out Splunk configuration. I wanted a stand-alone service just to make sure I can get my basic configuration running and forward logs from a Kubernetes instance. However, I am stuck on verifying the event receive resource. Here are the steps I followed:

Set up a Linux VM
Install Splunk
Confirm the web UI is working as expected
Create an index called splunk_test_events (Type: events, App: search)
Go to Settings > Forwarding and Receiving and set up a receiving port on 9997
In Settings > Data Inputs, set up an HTTP Event Collector (details below)
Ensure tokens are enabled (I forget where this was)
Restart Splunk
SSH into the machine and check the running ports (see below)
Attempt to curl an event

The HTTP Event Collector I set up as:

Name: splunk_testing_events
Source Type: Entered Source Type Selected
Allowed Indexes: splunk_test_events
Default Index: splunk_test_events
Output Group: None
Enable Indexer Acknowledgement: On

I verified that the HTTP Event Collector is enabled. I log into the machine and check the ports that are active:

$ sudo lsof -i -P -n | grep LISTEN
systemd-r 649 systemd-resolve 13u IPv4 23727 0t0 TCP 127.0.0.53:53 (LISTEN)
sshd 751 root 3u IPv4 26648 0t0 TCP *:22 (LISTEN)
sshd 751 root 4u IPv6 26650 0t0 TCP *:22 (LISTEN)
splunkd 6405 root 4u IPv4 63003 0t0 TCP *:8089 (LISTEN)
splunkd 6405 root 60u IPv4 63818 0t0 TCP *:9997 (LISTEN)
splunkd 6405 root 128u IPv4 123397 0t0 TCP *:8088 (LISTEN)
splunkd 6405 root 156u IPv4 64895 0t0 TCP *:8000 (LISTEN)
mongod 6482 root 10u IPv4 61364 0t0 TCP *:8191 (LISTEN)
python3.7 6623 root 7u IPv4 63884 0t0 TCP 127.0.0.1:8065 (LISTEN)

Now I try to send an event with curl:

curl -v -k -H "Authorization: Splunk GENERATED_HEC_TOKEN" http://VM_PUBLIC_IP:9997/services/collector/event -d '{ "event": "testing manually" }'

I get back an error:

* Trying VM_PUBLIC_IP:9997...
* Connected to VM_PUBLIC_IP (VM_PUBLIC_IP) port 9997 (#0)
> POST /services/collector/event HTTP/1.1
> Host: VM_PUBLIC_IP:9997
> User-Agent: curl/7.74.0
> Accept: */*
> Authorization: Splunk GENERATED_HEC_TOKEN
> Content-Length: 31
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 31 out of 31 bytes
* Empty reply from server
* Connection #0 to host VM_PUBLIC_IP left intact
curl: (52) Empty reply from server

I tried some of the other ports:
8088: Connection reset by peer
8089: Connection reset by peer
8000: HTTP/1.1 303 (which I expected in this case)

What am I doing wrong here?
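For comparison, a hedged sketch of the request shape HEC normally expects: the collector listens on its own port (8088 by default, visible in the lsof output above), not on the 9997 receiving port, which speaks the forwarder-to-indexer protocol and will never answer HTTP. The reset seen on 8088 may simply be plain http against an SSL-enabled token endpoint, so https is used below; the placeholders are kept from the post.

curl -k https://VM_PUBLIC_IP:8088/services/collector/event \
  -H "Authorization: Splunk GENERATED_HEC_TOKEN" \
  -d '{"event": "testing manually", "index": "splunk_test_events"}'

With indexer acknowledgement enabled on the token, a channel header such as -H "X-Splunk-Request-Channel: <some GUID>" is also required, or acknowledgement can be switched off for a first smoke test.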
I see the following errors when running a search against data in a vix. We recently upgraded to 8.1.3, at which point I assume the third-party jar files changed from 1.10 to 1.19. I think there is some config that is still pointing to the old 1.10 jar file. I have looked in indexes.conf for the vix configuration which references the path, changed it to the new version of commons-compress-1.19.jar (see below), and deployed it to the SHC; however, it does not seem to make any difference. Can anyone help?
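A hedged sketch of the kind of provider stanza line being described, purely for orientation: the stanza name, path, and setting shown here are placeholders rather than the actual configuration, and the setting name should be verified against the indexes.conf spec for the version in use. The usual gotcha is that the jar list lives on the provider stanza and on every search peer's copy of the bundle, so an old path can survive in a different app or in a local override even after one copy is edited.

# indexes.conf (provider stanza for the virtual index)
[provider:my_hadoop_provider]
vix.env.HUNK_THIRDPARTY_JARS = $SPLUNK_HOME/bin/jars/thirdparty/common/commons-compress-1.19.jar

Running btool on a search peer (for example, splunk btool indexes list --debug | grep commons-compress, assuming CLI access there) shows which file is actually supplying the value after the bundle is pushed.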
I need your help to back up the entire set of .conf files in Splunk Enterprise and ES separately, please. Can this backup be scheduled? Is a scheduled backup recommended here? Thanks a million in advance.
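A minimal sketch of a common approach, offered as an assumption about the environment rather than a Splunk-prescribed method: all .conf files (including those of the ES apps) live under $SPLUNK_HOME/etc, so archiving that directory on a cron schedule captures everything, and the main ES app can be archived separately if the sets must be split. Paths, user, and app selection are placeholders; ES also installs DA-/SA- supporting add-ons that would need to be included for a complete ES-only copy.

# /etc/cron.d/splunk-conf-backup  (runs nightly at 02:00 as the splunk user)
0 2 * * * splunk tar -czf /backup/splunk_etc_$(date +\%F).tar.gz -C /opt/splunk etc
0 2 * * * splunk tar -czf /backup/splunk_es_app_$(date +\%F).tar.gz -C /opt/splunk/etc/apps SplunkEnterpriseSecuritySuite

Scheduled backups of etc are generally sensible: the directory is small compared to index data and changes whenever knowledge objects change.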