All Topics



We want to limit the ingestion of data coming from certain sources (in this case the value would be in Properties.HostName) because they are basically not working correctly (customer machines) and continue to spam the system. (Turning them off is not an option.) I know that we can add hardcoded filters such as the one below:

Name: Serilog:Filter:nn:Args:expression
Value: @p['AssemlyName'] = 'SomeAssembly.xxx.yyy' and @p['HostName'] in ['Spammer1', 'Spammer2', ...]

But the spammers change from time to time, and we can generate their list. The question is: if I have a list of these spammers (in any form needed), can I somehow use a variable in the expression above, or some other method, to read from that list (in place of the "in [...]" expression)?
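If the events ultimately land in Splunk, one alternative worth noting is dropping them at ingestion time instead of in Serilog. A minimal sketch, assuming the data passes through a heavy forwarder or indexer; the sourcetype name and regex are hypothetical and would need to match your events:

props.conf:
[my_serilog_sourcetype]
TRANSFORMS-drop_spammers = drop_spammy_hosts

transforms.conf:
[drop_spammy_hosts]
# send matching events to the nullQueue so they are never indexed
REGEX = "HostName"\s*:\s*"(Spammer1|Spammer2)"
DEST_KEY = queue
FORMAT = nullQueue

The spammer list still has to be regenerated into the REGEX when it changes, so this moves the maintenance rather than eliminating it.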
Is there any way to authenticate DB Connect using key pair instead of user/password?  If not, any suggested workarounds anyone has found?
In indexes.conf from the CM, I tried to set thawedHomePath to a volume, which I have since learned does not work. I set the path from volume:cold back to $SPLUNK_DB, but no matter what I do, the indexer will not acknowledge that I changed it back; it still thinks it's set to the volume. I modified it, commented it out, deleted the whole indexes.conf file, and loaded a manual one into /etc/system/local/indexes.conf, and nothing will un-stick it. Every time I start the indexer, the logs show it won't start because thawedHomePath is still mapped to a volume. When I run splunk btool indexes list --debug, it shows the thawedHomePath in question is configured correctly. Has anyone ever experienced this before? Any suggestions on how to get it to accept the change? Running Splunk 9.2 on RHEL 8 with 1 CM and 2 IDXs clustered together. Fairly new deployment, still working the bugs out.
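For reference, a minimal sketch of what a cleaned-up stanza could look like in indexes.conf (the index and volume names here are hypothetical). Note that the documented attribute is thawedPath, and it is the one path attribute that cannot reference a volume:

[my_index]
homePath   = volume:hot/my_index/db
coldPath   = volume:cold/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb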
I've got this search:

index=my_index data_type=my_sourcetype earliest=-15m latest=now
| eval domain_id=if(isnull(domain_id), "NULL_domain_id", domain_id)
| eval domain_name=if(isnull(domain_name), "NULL_domain_name", domain_name)
| eval group=if(isnull(group), "NULL_Group", group)
| eval non_tier_zero_principal=if(isnull(non_tier_zero_principal), "NULL_non_tier_zero_principal", non_tier_zero_principal)
| eval path_id=if(isnull(path_id), "NULL_path_id", path_id)
| eval path_title=if(isnull(path_title), "NULL_path_title", path_title)
| eval principal=if(isnull(principal), "NULL_principal", principal)
| eval tier_zero_principal=if(isnull(tier_zero_principal), "NULL_tier_zero_principal", tier_zero_principal)
| eval user=if(isnull(user), "NULL_user", user)
| eval key=sha512(domain_id.domain_name.group.non_tier_zero_principal.path_id.path_title.principal.tier_zero_principal.user)
| table domain_id, domain_name, group, non_tier_zero_principal, path_id, path_title, principal, tier_zero_principal, user, key

Because we get repeating events where the only difference is the timestamp, I'm trying to put together a lookup that contains the sha512 key so that a repeated event can be skipped. What I found is that I can't have a blank value in the sha512 command. Does anyone have a better way of doing this than what I have? TIA, Joe
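A more compact way to handle the nulls is fillnull, which replaces empty values across a whole field list in one step. A minimal sketch assuming the same field names; the "|" separator in the concatenation is added so adjacent fields can't collide (e.g. "ab"+"c" hashing the same as "a"+"bc"):

index=my_index data_type=my_sourcetype earliest=-15m latest=now
| fillnull value="NULL" domain_id domain_name group non_tier_zero_principal path_id path_title principal tier_zero_principal user
| eval key=sha512(domain_id."|".domain_name."|".group."|".non_tier_zero_principal."|".path_id."|".path_title."|".principal."|".tier_zero_principal."|".user)
| table domain_id, domain_name, group, non_tier_zero_principal, path_id, path_title, principal, tier_zero_principal, user, key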
Using Splunk Add-on for Microsoft Windows and Splunk Add-on for Unix and Linux on Splunk Enterprise v9.3.0. What are the Linux (RHEL 8) equivalents for these Splunk Windows queries?

e.g. Network Traffic:

Windows:
index=wmi host=MyWindowsHost sourcetype="Perfmon:Network Interface" counter=Bytes* | timechart span=15m max(Value) as "Bytes/sec" by counter

Linux: ?

e.g. CPU:

Windows:
index=wmi host=MyWindowsHost sourcetype="Perfmon:CPU Load" | timechart span=15m max(Value) as "CPU Load" by counter

Linux:
index=os host=MyLinuxHost source=cpu CPU="all" | timechart span=15m max(pctSystem), max(pctUser) by CPU
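For the network-traffic case, a hedged sketch using the Unix add-on's interfaces scripted input; the source and field names (Name, RXbytes, TXbytes) are assumptions to verify against your events, and the byte fields are cumulative counters, so a true Bytes/sec figure needs the difference between successive samples rather than the raw values:

index=os host=MyLinuxHost source=interfaces
| timechart span=15m latest(RXbytes) as RXbytes latest(TXbytes) as TXbytes by Name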
I need help writing a query to fetch logs from the system.
Hi there! I'm looking for a comprehensive list of report ideas for all of security, including management/metrics, operations, and compliance. Has anyone created such a list? Would you mind sharing? I'd like to see a long list of reports so I can help identify gaps in security posture. Thanks!!!
What is the best approach for data visualization using tstats? I am new to using tstats; I moved away from regular index searches because tstats speeds up the query process. For example, this query shows the vulnerabilities found on each IP:

| tstats summariesonly=t dc(Vulnerability.signature) as vulnerabilities from datamodel=Vulnerability by Vulnerability.dest
| sort -vulnerabilities
| rename Vulnerability.dest as ip_address
| table ip_address vulnerabilities

For example, the first line from that query shows IP 192.168.1.5 has 4521 vulnerabilities found. I then created another detail table to verify and show some other columns related to that IP (click the IP and send a token), but it shows a different amount of data (4638 events):

| tstats summariesonly=t count FROM datamodel=Vulnerability WHERE Vulnerability.destination="192.168.1.5" AND Vulnerability.signature="*" BY Vulnerability.destination, Vulnerability.signature, Vulnerability.severity, Vulnerability.last_scan, Vulnerability.risk_score, Vulnerability.cve, Vulnerability.cvss_v3_score, Vulnerability.solution
| `drop_dm_object_name(Vulnerability)`
| rename destination as ip_address
| fillnull value="Unknown" ip_address signature severity last_scan risk_score cve cvss_v3_score solution
| table ip_address signature severity last_scan risk_score cve cvss_v3_score solution

I know this is related to the inaccuracy of the query, because if I change the "BY" parameter, the amount of data displayed changes too. How do I make the count from this query match the output of the first query, while still displaying the other fields even when they are empty?
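The two numbers differ because the first query counts distinct signatures per destination, while the second counts one row per combination of signature, severity, last_scan, and so on; a signature that appears with several severities or scan dates is counted more than once. A hedged sketch of one way to align them: group only by destination and signature, and fold the remaining fields in with values() (field names here assume the CIM Vulnerabilities datamodel, where the field is usually dest rather than destination):

| tstats summariesonly=t count values(Vulnerability.severity) as severity values(Vulnerability.cve) as cve values(Vulnerability.solution) as solution from datamodel=Vulnerability where Vulnerability.dest="192.168.1.5" by Vulnerability.dest, Vulnerability.signature
| `drop_dm_object_name(Vulnerability)`
| rename dest as ip_address
| fillnull value="Unknown" severity cve solution
| table ip_address signature severity cve solution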
I've noticed a ton of "Unable to read in product version information" and "[HTTP 401] Client is not authenticated" errors lately in the splunk _internal logs. Has anyone else seen the same problem? Is this something that should be ignored? Thanks
We are getting hundreds of these errors a day in the internal logs for orig_component="SearchOperator:rest" and for app="website_monitoring":

Failed to fetch REST endpoint uri=https://127.0.0.1:8089/services/data/inputs/web_ping?count=0 from server https://127.0.0.1:8089. Check that the URI path provided exists in the REST API.

I could not find anything pointing to that IP in our website_monitoring app. Could it be something configured to point to some local endpoint? Is anyone else coming across this issue? Thanks
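As a diagnostic sketch (not a confirmed root cause): 127.0.0.1:8089 is just the local management port, and /services/data/inputs/web_ping only exists on instances where the Website Monitoring app's modular input is installed, so this error often means a search referencing that endpoint runs somewhere the app isn't. Running this on the search head in question shows whether the endpoint exists there:

| rest splunk_server=local /services/data/inputs/web_ping count=0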
Recently, I observed a message in Splunk Cloud (version 9.2.2403.105) stating, "Found an empty value in 'allowedDomainList' in alert_actions.conf." However, when I check the "Allowed Domain" setting in the UI by navigating to "Settings > Server settings > Email," it indicates "Leave empty for no restrictions." Despite this, I am still seeing the warning message.   #splunkcloud  #splunk
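For what it's worth, the warning usually refers to a line in some alert_actions.conf that sets the key to an explicitly empty value, which is different from the key being absent. A sketch of the distinction (in Splunk Cloud the underlying file typically isn't directly editable, so support or the owning app may need to remove it):

[email]
# an explicitly empty value like this triggers the warning:
allowedDomainList =
# whereas omitting the line entirely, or listing domains, does not:
# allowedDomainList = example.com, example.org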
Hello everyone! I just installed a Splunk ES trial on EC2 and also tried it on a Digital Ocean instance. All goes well, but when I try to sign in after typing my credentials, it shows a server error. I read multiple discussions and threads and tried applying some fixes to web.conf, but nothing works so far. I grabbed some error logs from the splunkd.log file and am sharing them here as well:

08-20-2024 15:26:55.179 +0000 WARN HttpClientRequest [55474 WebuiStartup] - Returning error HTTP/1.1 502 Error connecting: Connection refused
08-20-2024 15:26:55.379 +0000 WARN HttpClientRequest [55474 WebuiStartup] - Returning error HTTP/1.1 502 Error connecting: Connection refused
08-20-2024 15:26:55.579 +0000 WARN HttpClientRequest [55474 WebuiStartup] - Returning error HTTP/1.1 502 Error connecting: Connection refused
(the same WARN line repeats roughly every 200 ms through 08-20-2024 15:27:01.599)
Hello everyone, I have a requirement that data should be searchable only up to the last 30 days on the search page, even though the index retention period is 90 days. Basically, it should allow the user to search only the last 30 days of events, and only if required should a user be allowed to search the full 90 days. Is there any configuration available in Splunk to control which data is searchable like this? Thanks in advance.
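One mechanism worth looking at is the per-role search time window in authorize.conf, which caps how far back a role's searches can go while leaving the data itself retained. A minimal sketch with a hypothetical role name (srchTimeWin is in seconds); users who need the full 90 days would get a role without this cap:

[role_search_30d]
importRoles = user
# limit this role's searches to a 30-day window (30 * 86400 seconds)
srchTimeWin = 2592000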
We need to integrate NAS logs into Splunk, but I don't know how to do the integration. We have an SC4S container. Can anyone help with this?
Hello community, we are currently a bit desperate because of a Splunk memory leak problem under Windows OS that most probably all of you have, but may not have noticed yet. Here is the history and analysis of it:

The first time we observed a heavy memory leak problem on a Windows Server 2019 instance was after updating to Splunk Enterprise version 9.1.3 (from 9.0.7). The affected Windows server has some Splunk apps installed (Symantec, ServiceNow, MS O365, DB Connect, SolarWinds) which start a lot of Python scripts at very short intervals. After the update, the server crashed every few hours due to low memory. We opened Splunk case #3416998 on Feb 9th.

With the MS Sysinternals tool rammap.exe we found a lot of "zombie" processes (PIDs no longer listed in Task Manager) which are still using some KB of memory (~20-32 KB). Process names are btool.exe, python3.exe, splunk-optimiz, splunkd.exe. It seems every time a process of one of these programs ends, it leaves behind such a memory usage. The Splunk apps on our Windows server do this very often and fast, which results in thousands of zombie processes.

After this insight we downgraded Splunk on the server to 9.0.7 and the problem disappeared. Then on a test server we installed Splunk Enterprise versions 9.1.3 and 9.0.9. Both versions show the same issue. New Splunk case #3428922. On March 28th we got this information from Splunk: .... got an update from our internal dev team on this "In Windows, after upgrading Splunk enterprise to 9.1.3 or 9.2.0 consumes more memory usage. (memory and processes are not released)" internal ticket. They investigated the diag files and seems system memory usage is high, but only Splunk running. This issue comes from the mimalloc (memory allocator). This memory issue will be fixed in the 9.1.5 and 9.2.2 ..........

9.2.2 arrived on July 1st: unfortunately, the memory leak persists. Third Splunk case #3518811 (which is still open). Also not fixed in version 9.3.0. Even after an online session showing them the rammap.exe screen, they wanted us to provide diags again and again from our (test) servers, but they should actually be able to reproduce it in their own lab.

The huge problem is: because of existing vulnerabilities in the installed (affected) versions we need to update Splunk (Heavy Forwarders) on our Windows servers, but cannot due to the memory leak issue.

How to reproduce:
- OS tested: Windows Server 2016, 2019, 2022, Windows 10 22H2
- Splunk Enterprise versions tested: 9.0.9, 9.1.3, 9.2.2 (Universal Forwarder not tested)
- Let the default installation run for some hours (splunk service running).
- Download rammap.exe from https://learn.microsoft.com/en-us/sysinternals/downloads/rammap and start it.
- Go to the Processes tab and sort by the Process column.
- Look for btool.exe, python3.exe, and splunkd.exe with a small total memory usage of about ~20-32 KB. The PIDs of these processes don't exist in the task list (see Task Manager or tasklist.exe).
- With the Splunk default installation (without any other apps) the memory usage increases slowly, because the default apps' script intervals aren't very short.
- Stopping the Splunk service releases the memory (and the zombie processes disappear in rammap.exe).
- For faster results you can add an app for excessive testing with python3.exe, starting it at short (0 second) intervals. The test.py doesn't need to exist! Splunk starts python3.exe anyway. Only the inputs.conf file is needed:

\etc\apps\pythonDummy\local\inputs.conf

[script://$SPLUNK_HOME/etc/apps/pythonDummy/bin/test.py 0000]
python.version = python3
interval = 0

[script://$SPLUNK_HOME/etc/apps/pythonDummy/bin/test.py 1111]
python.version = python3
interval = 0

(if you want, add some more stanzas: 2222, 3333, and so on)

- The more Python script stanzas there are, the more and faster the zombie processes appear in rammap.exe.

Please share your experiences. And please open tickets with Splunk support if you also see the problem. We hope Splunk finally reacts.
Hello, I have a query used in Splunk Enterprise web (search):

index="__eit_ecio*" | ... | bin _time span=12h | ... | table ...

I am trying to put that into Python API code using the Job class, like this:

searchquery_oneshot = "<my above query>"

I am getting the error "SyntaxError: invalid decimal literal" pointing to the 12h in the main query. How can I fix this?

2) Can I direct "collect" results (summary index) via this API into JSON format?

Thanks
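That SyntaxError usually means the query string isn't quoted as a whole: the double quotes inside index="__eit_ecio*" terminate the Python string early, so 12h gets parsed as Python code. A minimal sketch using the splunk-sdk Python library, with hypothetical connection details and a placeholder query (note that a oneshot search string needs the leading "search" keyword):

import splunklib.client as client  # pip install splunk-sdk

# hypothetical connection details
service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")

# single-quote the whole query so the inner double quotes survive
searchquery_oneshot = 'search index="__eit_ecio*" | bin _time span=12h | stats count by _time'

# output_mode="json" returns the results as JSON
results = service.jobs.oneshot(searchquery_oneshot, output_mode="json")
print(results.read())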
Hi Team, can you please help me find a way to change the color of the output value in a single value visualization?

If the status is OK, then display OK in green.
If the status is NOK, then display NOK in red.

Current code:

<panel>
  <title>SEMT FAILURES DASHBOARD</title>
  <single>
    <search>
      <query>(index="events_prod_gmh_gateway_esa") sourcetype="mq_PROD_GMH" Cr=S* (ID_FCT=SEMT_002 OR ID_FCT=SEMT_017 OR ID_FCT=SEMT_018) ID_FAMILLE!=T2S_ALLEGEMENT
| eval ERROR_DESC=case(Cr == "S267", "T2S - Routing Code not related to the System Subscription.", Cr == "S254", "T2S - Transcodification of parties is incorrect.", Cr == "S255", "T2S - Transcodification of accounts are impossible.", Cr == "S288", "T2S - The Instructing party should be a payment bank.", Cr == "S299", "Structure du message incorrecte.", 1=1, "NA")
| stats count as COUNT_MSG
| eval status = if(COUNT_MSG = 0, "OK", "NOK")
| table status</query>
      <earliest>@d</earliest>
      <latest>now</latest>
      <sampleRatio>1</sampleRatio>
      <refresh>1m</refresh>
      <refreshType>delay</refreshType>
    </search>
    <option name="drilldown">all</option>
    <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
    <option name="refresh.display">progressbar</option>
    <option name="trellis.enabled">0</option>
    <option name="useColors">1</option>
  </single>
</panel>

Current output: (screenshot omitted)
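Range coloring on a single value applies to numeric results, so the rangeColors option has no effect while the search returns the string OK/NOK. A hedged sketch of a common workaround: return COUNT_MSG itself, color 0 green and anything above it red, and carry the OK/NOK wording in a label (showing the literal OK/NOK text in color generally needs rangemap plus custom CSS instead). Only the changed pieces are shown:

<query>... | stats count as COUNT_MSG | table COUNT_MSG</query>
...
<option name="rangeValues">[0]</option>
<option name="rangeColors">["0x53a051","0xdc4e41"]</option>
<option name="useColors">1</option>
<option name="underLabel">0 = OK, anything else = NOK</option>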
Our deployment has indexers located in the main data center and multiple branches. We plan to deploy intermediate forwarders and Universal Forwarder (UF) agents in our remote branches to collect logs from security devices like firewalls and load balancers.

What is the recommended bandwidth between the intermediate forwarders and the indexers?
What is the recommended bandwidth between the UF agents and the indexers?
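As a rough sizing sketch (the daily volume here is hypothetical): average bandwidth scales with ingest volume, roughly daily ingest x 8 bits / 86,400 seconds. For example, a branch sending 50 GB/day needs about 50 x 1024 x 8 / 86,400 ≈ 4.7 Mbit/s on average; peak hours, TLS overhead, and catch-up after outages usually mean provisioning several times the average.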
Hi, I have a requirement to create a software stack dashboard in Splunk which shows all the details that are seen in Task Manager, as in the screenshot below (not included). We have both the Windows and Unix add-ons installed and are getting the logs. Can someone please help me create a dashboard which shows all these details?
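As a starting point, a hedged sketch of a Task Manager-style process table from the Unix add-on's ps scripted input; the sourcetype and field names (pctCPU, pctMEM, USER, COMMAND) are assumptions to verify against your events, and the Windows side would use the corresponding process data from the Windows add-on:

index=os sourcetype=ps host=MyLinuxHost
| stats latest(pctCPU) as "CPU %" latest(pctMEM) as "Memory %" by host USER COMMAND
| sort - "CPU %"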
Hi, we maintain a lookup table which contains a list of account_ids and some other info, as shown below:

account_id | account_owner | type
12345      | David         | prod
123456     | John          | non-prod
45678      | Nat           | non-prod

In our query, we use a lookup command to enrich the data using this lookup table. We match by account_id and get the corresponding owner and type as follows:

| lookup accounts.csv account_id OUTPUT account_owner type

In some events (depending on the source), the account_id values contain a preceding 0, but in our lookup table the account_id column does not have a preceding 0. Basically, some events will have account_id=12345 and some might have account_id=012345; they are both the same account. The lookup command produces results when there is an exact matching account_id in the events, but fails when there is that extra 0 at the beginning. How can I tune the lookup command to check the lookup table both with and without the preceding 0 on the account_id field, so that if either form matches it produces the corresponding results? Hope I am clear. I am unable to come up with a regex for this.
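One approach is to normalize the field before the lookup rather than changing the lookup itself. A minimal sketch, assuming lookup-side IDs never legitimately start with 0: ltrim strips the leading zeros from the event value, and the lookup matches on the normalized copy:

| eval account_id_norm=ltrim(account_id, "0")
| lookup accounts.csv account_id AS account_id_norm OUTPUT account_owner type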