All Topics

I want to represent incident data as the total number of incidents in opened and closed status, on a biweekly basis. Please help.
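A minimal SPL sketch of one way to count this biweekly, assuming an index called incident_index and a status field with values opened and closed (all placeholders for whatever the real data uses):

index=incident_index status IN ("opened", "closed")
| timechart span=14d count by status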
How can we stop Docker from sending these logs? We recently disabled the ingestion from Docker to Splunk in the Splunk HEC settings, but after we disabled and deleted the HEC settings in Splunk, this issue occurs:

01-02-2023 09:33:13.494 -0800 ERROR HttpInputDataHandler [54154 HttpDedicatedIoThread-0] - Failed processing http input, token name=n/a, channel=n/a, source_IP=10.22.100.6, reply=4, events_processed=0, http_input_body_size=291831, parsing_err=""
01-02-2023 09:33:13.379 -0800 ERROR HttpInputDataHandler [54154 HttpDedicatedIoThread-0] - Failed processing http input, token name=n/a, channel=n/a, source_IP=10.22.100.6, reply=4, events_processed=0, http_input_body_size=225158, parsing_err=""

We are getting almost 5,000 ERROR events every day. We tried to delete the daemon.json in Docker (https://docs.docker.com/config/containers/logging/splunk/), but Docker is still sending error logs.
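The errors typically continue because the Docker daemon, or containers started with the splunk logging driver, are still configured to post to HEC regardless of whether the token still exists on the Splunk side. As a hedged sketch (the driver choice and options are assumptions, not the poster's actual settings), /etc/docker/daemon.json could be pointed back at a local driver:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

Docker would then need to be restarted, and any containers started with --log-driver=splunk recreated, before they stop posting to the old HEC endpoint.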
Hello, the question is pretty simple: is there any way to query a KV store to find the last time that KV store was updated? I know how to do this for an index, but the query doesn't work for KV stores. Thank you
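One hedged sketch, assuming the collection is exposed as a lookup named my_collection and that writes to it populate a timestamp field such as last_updated (both names are placeholders, and the approach only works if such a field exists):

| inputlookup my_collection
| stats max(last_updated) as last_update_time
| eval last_update_time = strftime(last_update_time, "%F %T")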
Hi, how can we find out the list of universal forwarders sending data to Splunk? Also, how do we ensure that all the UFs that have been configured are sending data to Splunk? Thank you so much in advance.
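A commonly used sketch reads the indexers' own _internal metrics for incoming forwarder connections (the field names below match typical group=tcpin_connections events, but verify them in your environment):

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) as last_seen, latest(version) as version by hostname
| eval last_seen = strftime(last_seen, "%F %T")

To confirm that every configured UF is actually reporting, the resulting host list can then be compared against a lookup of expected forwarders.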
I can't set up the sending of Windows logs with Splunk Event Log. No information appears in the Splunk Cloud console, neither the logs nor even the computer name. I followed the procedure by installing and configuring the Universal Forwarder and installing the add-on on the console, but it doesn't seem to work. Do you have any suggestions on how to solve this problem?
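For comparison, a minimal inputs.conf sketch on the Universal Forwarder for Windows Event Log collection (the index name is a placeholder, and in many deployments these stanzas come from the Splunk Add-on for Windows instead):

[WinEventLog://Application]
disabled = 0
index = wineventlog

[WinEventLog://Security]
disabled = 0
index = wineventlog

[WinEventLog://System]
disabled = 0
index = wineventlog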
Anti-Bot related logs are not appearing in the datamodel results when I run a search query using the datamodel-based search below. Could you please guide me on how to fix this issue? Thank you.

| from datamodel:"Intrusion_Detection".AntiBot
| search Gateway=xxxxxx

But when I run a search query using the index-based search below, the logs are visible.

index=checkpoint product=Anti-Bot signature!="" severity IN (High, critical) confidence_level=low

Below is a sample log line.

time=1672655849|hostname=xxxx|severity=High|confidence_level=Low|product=Anti-Bot|action=Detect|ifdir=outbound|ifname=eth3|loguid={0x5127e871,0xbd548381,0xe17d3047,0x8b1277fc}|origin=x.x.x.x|originsicname=CN\=XXXXX,O\=XXXXXX|sequencenum=11|time=1672655849|version=5|dns_message_type=Query|dst=X.X.X.X|lastupdatetime=1672658788|log_id=2|malware_action=Trying to locate a C&C|malware_rule_id={XXXXX}|malware_rule_name=Anti-Bot Prevent Mode|policy=XXXX|policy_time=1668791496|protection_id=XXX|protection_name=XXXXX|protection_type=DNS reputation|proto=17|question_rdata=XXX|received_bytes=0|resource=technetium.network|rule_name=XXX|rule_uid=XXXX|s_port=53361|scope=x.x.x.x|sent_bytes=0|service=xx|session_id={0x63b26e99,0x11,0x5f17f465,0xc5683bca}|smartdefense_profile=XXXX Standard Anti-bot - Prevent|src=x.x.x.x|suppressed_logs=10|tid=57558|layer_name=IPS|layer_name=IPS|layer_name=IPS|layer_uuid={xxxx}|layer_uuid={xxxx}|layer_uuid={xxxx}|layer_uuid={xxx}|layer_uuid={xxxx}|layer_uuid={xxxx}|malware_rule_id={xxxxx}|malware_rule_id={xxxxx}|malware_rule_id={xxxxx}|malware_rule_id={xxxxx}|malware_rule_id={xxxxx}|malware_rule_id={xxxxx}|malware_rule_name=IPS - Prevent Profile|malware_rule_name=Anti-Bot Prevent Mode|malware_rule_name=IPS - Prevent Profile|malware_rule_name=Anti-Bot Prevent Mode|malware_rule_name=IPS - Prevent Profile|malware_rule_name=Anti-Bot Prevent Mode|smartdefense_profile=XXXX Standard IPS - Prevent|smartdefense_profile=XXX Standard Anti-bot - Prevent|smartdefense_profile=xxxxx Standard IPS - Prevent|smartdefense_profile=xxxxx Standard Anti-bot - Prevent|smartdefense_profile=xxxxx Standard IPS - Prevent|smartdefense_profile=xxxxx Standard Anti-bot - Prevent
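Since the raw index search returns events, a first hedged check is whether those events carry the eventtypes and tags that the Intrusion_Detection data model's constraints expect (the required tags depend on the CIM version and the Check Point add-on in use, so this is only a diagnostic sketch):

index=checkpoint product=Anti-Bot signature!=""
| stats count by eventtype tag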
Hi, when using the "Insert link" option in "Markdown Text" in Dashboard Studio, I would like the link/URL to open in a new browser tab. I found the suggestions below, but neither of these options seems to work.

[link](url){:target="_blank"}
<a href="http://example.com/" target="_blank">Hello, world!</a>

Any suggestions?
I configured Splunk triggered actions for Slack and Datadog events, but I am only getting the Slack notification; the Datadog events are not being created or triggered. Below is the configuration.
I have a use case where I would need to use regex to extract values only if a condition is met.

index=sample [search index=sample key=my_key | table msg host]
| rex max_match=0 field=_raw "a\d=\"(?<test>.*?)\""
| eval a = if(len(a)>255 OR isnull(a),"*Regex and if statements need to be here*",a)
| stats values(test) as test by msg host

The aim is to use regex inside the if statement. The logic is: if len(a) > 255 or a is null, then use regex and populate the value test. I am looking for the same functionality as match(), but instead of a boolean value I need the matched results. Is there any way to get this functionality?
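rex is a command rather than an eval function, so it cannot be placed inside if() directly; a hedged workaround is to always extract into a scratch field and then choose between it and the original value with eval (the field names follow the question, the regex is unchanged):

index=sample [search index=sample key=my_key | table msg host]
| rex max_match=0 field=_raw "a\d=\"(?<extracted>.*?)\""
| eval test = if(len(a) > 255 OR isnull(a), extracted, a)
| stats values(test) as test by msg host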
Hi All, I want to delete a few services and their entities from the Splunk ITSI search head. If I delete the services directly, does it remove all the associated entities? Regards, PNV
 
msiexec.exe /qn /I splunkforwarder-9.0.2-17e00c557dc1-x64-release.msi DEPLOYMENT_SERVER="10.0.0.7:8089" SPLUNKUSERNAME=Admin SPLUNKPASSWORD=S@M3!! AGREETOLICENSE=Yes LAUNCHSPLUNK=0

This appears to be an upgrade of Splunk.
--------------------------------------------------------------------------------
Splunk has detected an older version of Splunk installed on this machine. To finish upgrading to the new version, Splunk's installer will automatically update and alter your current configuration files. Deprecated configuration files will be renamed with a .deprecated extension.
You can choose to preview the changes that will be made to your configuration files before proceeding with the migration and upgrade:
If you want to migrate and upgrade without previewing the changes that will be made to your existing configuration files, choose 'y'.
If you want to see what changes will be made before you proceed with the upgrade, choose 'n'.
Perform migration and upgrade without previewing configuration changes? [y/n] y
-- Migration information is being logged to 'c:\program files\splunkuniversalforwarder\var\log\splunk\migration.log.2022-12-31.15-42-09' --
Migrating to:
VERSION=9.0.2
BUILD=17e00c557dc1
PRODUCT=splunk
PLATFORM=Windows-AMD64

It seems that the Splunk default certificates are being used. If certificate validation is turned on using the default certificates (not-recommended), this may result in loss of communication in mixed-version Splunk environments after upgrade.
"c:\program files\splunkuniversalforwarder\etc\auth\ca.pem": already a renewed Splunk certificate: skipping renewal
"c:\program files\splunkuniversalforwarder\etc\auth\cacert.pem": already a renewed Splunk certificate: skipping renewal
Failed to start mongod. Did not get EOF from mongod after 5 second(s).
[App Key Value Store migration] Starting migrate-kvstore.
Created version file path=c:\program files\splunkuniversalforwarder\var\run\splunk\kvstore_upgrade\versionFile36
[App Key Value Store migration] Collection data is not available.
ERROR - Failed opening "c:\program files\splunkuniversalforwarder\va
On version 9.0.0, when you go to Manage Apps in the web UI and then click on the Name column arrow to sort your apps by name, it just doesn't sort them correctly. Have any of you noticed that?
Though not an emergency yet, I am hoping to make a decision on one of the two following options soon:

1. Double down on the current strategy of adding indexers in a cluster behind a load balancer and assigning an external port for each additional indexer.
2. Define and pursue an alternative which would allow adding indexers to our indexer cluster, and UFs, without having to resolve the connectivity challenges associated with not having all future ports allowed across the entire enterprise.

First I'll describe our situation for context and then I'll ask the question. The situation at our large client is that there are tens of thousands of Universal Forwarders across the enterprise, and not all parts of the network(s) allow connectivity to our port range. For the sake of conversation, let's say the port range is 10000-10019 on 2 IP addresses: 1 for a test environment and 1 for a prod environment. Prod is the main concern here, as we will not be adding indexers to the test environment. Though we don't have 20 indexers yet, that would be a reasonable upper limit for the currently anticipated scope. For the sake of the question, let's say we have 8 indexers. Each port externally maps like this: prod.address:10000 - idx01:9996, prod.address:10001 - idx02:9996, etc., for a total of 8 in production.

However, earlier in the deployment there were only 4 indexers. Perhaps firewall requests were not always put in to consistently open all 20 ports instead of only the 4 which were online at that time. Firewall teams like to be able to test to verify connectivity rather than allow future needed connectivity, and thereby save themselves the trouble of the imminent 70,000+ firewall requests which will be needed to open up 20 ports across as many hosts... (And perhaps it would be better to simply have this allowed across the enterprise as part of my Option #1 above.)

My understanding is that Option 2 is not an option, because any strategy for presenting only a single address and port would preclude the UFs from having their special conversation with each indexer, which is part of Splunk's own particular way of balancing load, e.g. prod.ip:9996 doing round-robin TCP (or whatever makes sense) to idx1:9996, idx2:9996, idx3:9996, and so on. Short of redoing absolutely everything and moving to Heavy Forwarders behind a load balancer, I believe there is not another way of doing this. The biggest impact of moving to Heavy Forwarders would be having to re-onboard thousands of custom applications, in addition to planning the cutover from one way of collecting logs to the other in waves of applications.

So my question is: are there any alternatives for load balancing across multiple indexers which would allow us to use only one of our existing ports?

Thanks!
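For reference, the forwarder-side load balancing mentioned above is normally expressed in outputs.conf on each UF; a minimal sketch (host names and ports are placeholders, not the client's real topology) looks like this:

# outputs.conf on each Universal Forwarder (illustrative only)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# The UF rotates across this list itself (autoLB), which is why it needs
# direct reachability to every indexer address and port in the list.
server = prod.address:10000, prod.address:10001, prod.address:10002
autoLBFrequency = 30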
I would like to display multiple values in multiple pie charts. For example, I want to display Consumption and Remaining_Utilization for each PowerStation in trellis mode, using the following (partial) query:

index="mtx" source="*Dual_Station*" MTX="*" PowerStation="*"
| eval Remaining_Utilization = Capacity - Consumption
| stats values(Remaining_Utilization) as Remaining_Utilization, values(Consumption) as Consumption by PowerStation

What is missing after stats? I checked each and every thread related to pie charts in trellis mode in the community and couldn't find any answer.

Hint: I use the following query to draw a single pie chart:

index="mtx" source="*Dual_Station*" MTX="MTX_Name" PowerStation="PowerStation_Name"
| eval Remaining_Utilization = Capacity - Consumption
| chart values(Consumption) as Consumption over Remaining_Utilization
| transpose
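One hedged sketch (not verified against this data set) is to aggregate both measures per PowerStation and reshape them into label/value pairs, so that a pie chart with the trellis split set to PowerStation has a Consumption slice and a Remaining_Utilization slice in each panel:

index="mtx" source="*Dual_Station*" MTX="*" PowerStation="*"
| eval Remaining_Utilization = Capacity - Consumption
| stats latest(Consumption) as Consumption, latest(Remaining_Utilization) as Remaining_Utilization by PowerStation
| untable PowerStation metric value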
I'm having an issue with one of my monitored paths. Here's the monitor stanza; the blacklist line should only blacklist one file in a directory of about 420 log files:

[monitor:///logs/reg*/last/...]
sourcetype = xxxx:Regional
blacklist = xxxx_\d{4}-\d{2}-\d{2}\.log
index = xxxx
disabled = false
crcSalt = <SOURCE>

The output of splunk list monitor shows me all the files I expect to see based on the above stanza. Splunkd.log shows no problems reading any of them. My problem is that when I search Splunk, I'm missing all data from roughly 100 of the files, files that list monitor shows that I'm watching. I recently added the crcSalt=<SOURCE> line thinking that would help; it has not. Am I missing something obvious?
New customer seeking guidance for creating indexes/sourcetypes and determining granularity. Primarily we're looking for deeper guidance on the why more so than the what. We have a large, complex environment.

Our naming scheme for indexes thus far is: organization_category_purpose (e.g. acme_net_fw)
organization - unique to us, required, primarily used to segment data between organizations
category - broad, like network, application, endpoint, etc.
purpose - more specific, largely unique per category

Does the following seem like best practice for firewalls?
2 or 3 indexes used by firewalls (traffic, operations, maybe threats?)
Multiple sourcetypes split into the various indexes

We are looking at SC4S as a guide (https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/PaloaltoNetworks/panos/), although their examples are not always consistent. We are struggling to determine how granular to be with the purpose of the index and with the number of possible sourcetypes we can/will have. We do not have the need to specify sensitivity or retention time. Furthermore, we do not have the need to separate security/infrastructure teams. A slide from a Splunk presentation suggests that many sourcetypes get their own index for efficiency.

Questions:
With 4-5 separate firewall products in use in one organization (the most complex), we're looking at 20-25 unique sourcetypes distributed into around 3 firewall indexes, just for firewalls. Does this sound correct? We want to avoid unnecessary complexity for future searches, documentation, etc. while not destroying our efficiency.
Can anyone speak to their experiences with creating too many/too few indexes? Specifically on long-term organization, search efficiency, and overall experience?
Can anyone offer any additional real-world guidance on creating a data catalog?
We can't see any reason to split up Windows event logs for endpoints (security/application, etc.) but could see security being separate from the others for DCs. Does that sound correct?

Any resources or guidance appreciated. Here's what we're using so far:
SC4S example structure: https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/PaloaltoNetworks/panos/
https://lantern.splunk.com/Splunk_Success_Framework/Data_Management/Naming_conventions
https://subscription.packtpub.com/book/data/9781789531091/5/ch05lvl1sec32/best-practices-for-administering-splunk
https://kinneygroup.com/blog/the-proverbial-8-ball-splunk-implementation/
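Purely for illustration, the naming scheme described above would translate into indexes.conf entries along these lines (index names and paths are placeholders following the acme_net_fw example, not recommendations):

[acme_net_fw_traffic]
homePath   = $SPLUNK_DB/acme_net_fw_traffic/db
coldPath   = $SPLUNK_DB/acme_net_fw_traffic/colddb
thawedPath = $SPLUNK_DB/acme_net_fw_traffic/thaweddb

[acme_net_fw_ops]
homePath   = $SPLUNK_DB/acme_net_fw_ops/db
coldPath   = $SPLUNK_DB/acme_net_fw_ops/colddb
thawedPath = $SPLUNK_DB/acme_net_fw_ops/thaweddb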
I have run across an edge case dealing with some F5 data. Sometimes a node-down event can be reported one or more times before the node-up occurs. Currently, setting up a transaction on pool and member name, which should be unique, I end up with orphan records which aren't really orphans. Is there some way to only have one transaction open per unique set of fields, skipping the next startswith match and closing the transaction when it finds the endswith? I know I could set keeporphans=false, but that would negate the whole purpose of this report, which is to determine if a node is down. Here is what I am trying to do:

| makeresults
| eval _raw = "Nov 19 2022 00:24:37 mcpd[9745]: 01070638:5: Pool /den-dmz/ShapePool member /den-dmz/Shape-Prod-East:443 monitor status down. [ /den-dmz/ShapeMonitor: down; last error: /den-dmz/ShapeMonitor: Response Code: 307 (Moved Temporarily) @2022/11/19 00:24:37. ] [ was up for 0hr:0min:36sec ]"
| eval _time=strptime("1668835477","%s")
| append [| makeresults | eval _raw="Nov 19 2022 00:25:22 mcpd[9745]: 01070727:5: Pool /den-dmz/ShapePool member /den-dmz/Shape-Prod-East:443 monitor status up. [ /den-dmz/ShapeMonitor: up ] [ was down for 0hr:0min:5sec ]" | eval _time=strptime("1668835522","%s")]
| append [| makeresults | eval _raw="Nov 19 2022 00:25:17 mcpd[9745]: 01070638:5: Pool /den-dmz/ShapePool member /den-dmz/Shape-Prod-East:443 monitor status down. [ /den-dmz/ShapeMonitor: down; last error: /den-dmz/ShapeMonitor: Response Code: 307 (Moved Temporarily) @2022/11/19 00:25:17. ] [ was up for 0hr:0min:31sec ]" | eval _time=strptime("1668835517","%s")]
| append [| makeresults | eval _raw="Nov 19 2022 00:25:17 mcpd[9745]: 01070638:5: Pool /den-dmz/ShapePool member /den-dmz/Shape-Prod-OTHER:443 monitor status down. [ /den-dmz/ShapeMonitor: down; last error: /den-dmz/ShapeMonitor: Response Code: 307 (Moved Temporarily) @2022/11/19 00:25:17. ] [ was up for 0hr:0min:31sec ]" | eval _time=strptime("1668835600","%s")]
`comment("Find Pool Name")`
| rex field=_raw "(: Pool | ltm pool )(?<pool>.*?)( member| {)"
`comment("Determine which member of Pool")`
| rex field=_raw "(member |members delete { )(?<member>.*?)( monitor status| })"
`comment("Determine Actually Status")`
| rex field=_raw "monitor status (?<status>.*?)\."
`comment("deal with up down time")`
| eval timedown=if(status=="down", _time, null())
| eval timeup=if(status=="up", _time, null())
| fieldformat timedown=strftime(timedown,"%F %T")
| fieldformat timeup=strftime(timeup,"%F %T")
| sort 0 _time desc
| transaction pool, member startswith=eval(status=="down") endswith=eval(status=="up") keeporphans=true
| eval down_duration=if(isnull(timeup),now() - timedown, timeup - timedown)
| fieldformat down_duration=tostring(down_duration,"duration")
| table _time, pool, member, timedown, timeup, down_duration
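One hedged way to keep a repeated down event from opening another transaction (a sketch only, not tested against the full data set) is to drop consecutive duplicate statuses per pool/member with streamstats before the transaction command:

(continuing from the rex extractions in the search above)
| sort 0 pool member _time
| streamstats current=f last(status) as prev_status by pool member
| where isnull(prev_status) OR status != prev_status
| sort 0 _time desc
| transaction pool, member startswith=eval(status=="down") endswith=eval(status=="up") keeporphans=true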
Hi at all, a question before starting a new configuration. I configured custom fields on some Universal Forwarders using _meta in inputs.conf and it runs correctly. In my architecture there's an intermediate forwarder that collects different _meta values from the Universal Forwarders, and that also runs correctly. If I now try to add _meta to some inputs on the Heavy Forwarder itself: in your opinion (and/or experience), must I configure _meta in each input stanza of the Heavy Forwarder, or can I configure the [default] stanza without overriding the values from the other Universal Forwarders? Thank you for your attention. Ciao. Giuseppe
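For reference, the two configurations being compared would look roughly like this on the Heavy Forwarder (stanza names and field values are placeholders; this sketch does not claim which option is safe with respect to the _meta values arriving from the Universal Forwarders):

# Option A: per-input stanza on the Heavy Forwarder
[monitor:///var/log/hf_local.log]
_meta = site::rome tier::hf

# Option B: [default] stanza on the Heavy Forwarder
[default]
_meta = site::rome tier::hf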
Hello Splunkers!! I have the code below for my dashboard. In this code I have an issue with two panels, the two I have mentioned below. In the "Vulnerable Hosts" panel I have used a drilldown, and I use that drilldown token in the "Vulnerabilities" panel, but the drilldown is not working. My expectation is that once I click any value in the upper panel, the values in the lower panel will populate. Please suggest me some ideas on the same.

Panel: <title>Vulnerable Hosts : Selected host is "$hostname$"</title>
Panel: <title>Vulnerabilities</title>

Dashboard link: https://drive.google.com/file/d/1UCguHdcAfIcz2QXOUJvvULxGE-YSDuVj/view?usp=drivesdk
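For comparison, a minimal Simple XML sketch of the intended pattern (the token name $hostname$ and panel titles come from the post; the searches are placeholders):

<panel>
  <title>Vulnerable Hosts : Selected host is "$hostname$"</title>
  <table>
    <search>
      <query>index=vuln_index | stats count by host</query>
    </search>
    <drilldown>
      <!-- store the clicked value in the token used by the second panel -->
      <set token="hostname">$click.value$</set>
    </drilldown>
  </table>
</panel>
<panel>
  <title>Vulnerabilities</title>
  <table>
    <search>
      <query>index=vuln_index host="$hostname$" | stats count by signature</query>
    </search>
  </table>
</panel>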