All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi @gcusello, I am curious to know why I am able to see HTTP Event Collector under Data Inputs on my Indexer, while there is no HTTP Event Collector on my Search Head.

[screenshots: Data Inputs on the Indexer and on the Search Head]

Regards,
Rahul Gupta
This search result will always return 3 rows. I want to display all rows, but in trellis.

For the first row, it is the memory utilization for CIC-1.
For the second row, it is the memory utilization for CIC-2.
For the third row, it is the memory utilization for CIC-3.

How can I get the trellis to display based on rows? Do I need to add a new column "Name" and insert CIC-1, CIC-2, CIC-3 into the respective rows?
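If the row order is stable, a sketch of one way to add such a column, assuming your existing search is the first line (the CIC names are taken from the post above):

... your existing search ...
| streamstats count AS row
| eval Name=case(row==1, "CIC-1", row==2, "CIC-2", row==3, "CIC-3")
| fields - row

Trellis can then use Name as its split-by field.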
Our application's log entries are in JSON and I need to search for certain strings found in the field called message. I have no problem finding them with a regular search:

... AND (message="Application is closing." OR message="successfully started")

However, when I try to define a transaction with the seemingly same search criteria:

... | transaction source startswith="message=\"Application is closing.\"" endswith="message=\"successfully started\""

I get zero results. Am I escaping the quotes incorrectly or making some other syntax error?
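For comparison, a sketch that sidesteps the nested quoting by using transaction's eval-based filters, assuming message holds exactly "Application is closing." and contains "successfully started" as a substring (the comparison and match() pattern would need adjusting otherwise):

... | transaction source startswith=eval(message=="Application is closing.") endswith=eval(match(message, "successfully started"))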
Hello, the server cannot be accessed directly. Can I set up a deployment client on a remote forwarder? Port 8089 is open.
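In case it helps, a sketch of the forwarder-side deploymentclient.conf that points a remote forwarder at a deployment server over port 8089 (the hostname below is a placeholder):

[deployment-client]

[target-broker:deploymentServer]
targetUri = deploy-server.example.com:8089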
Need help with a solution for errors saying "unrecoverable in the server.....Python 3.x...." when downloading 60,000-100,000 search results in ES, please. Thanks in advance.
Is there a need for keeping the _internal index logs past a certain time period? My _internaldb is pretty large at 218GB total (db - 31, cold - 112, frozen - 75). You can see my current settings below. We have about 140 forwarders reporting to this indexer. Should I just remove the path to frozen and let them get deleted? Does anyone ever thaw internal logs? If so, what for?

[_internal]
homePath = $SPLUNK_DB\_internaldb\db
coldPath = $SPLUNK_DB\_internaldb\colddb
thawedPath = $SPLUNK_DB\_internaldb\thaweddb
coldToFrozenDir = $SPLUNK_DB\_internaldb\frozendb
frozenTimePeriodInSecs = 5184000
tstatsHomePath = volume:_splunk_summaries\_internaldb\datamodel_summary
maxConcurrentOptimizes = 6
maxWarmDBCount = 60
maxHotSpanSecs = 86400
maxHotBuckets = 8
maxDataSize = auto
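For reference, a minimal sketch of the same stanza with archiving removed, assuming you decide frozen _internal data is not worth keeping; without coldToFrozenDir (or a coldToFrozenScript), buckets older than frozenTimePeriodInSecs are deleted instead of archived:

[_internal]
homePath = $SPLUNK_DB\_internaldb\db
coldPath = $SPLUNK_DB\_internaldb\colddb
thawedPath = $SPLUNK_DB\_internaldb\thaweddb
frozenTimePeriodInSecs = 5184000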
I am looking for a good alert manager add-on for ES, to ingest MS Azure AD alert data into ES. There are two of them on Splunkbase, the Azure Sentinel Add-on for Splunk and the Alert Manager Add-on, but both show 0 installs. Has anyone here used one that is good for my needs? Thank you in advance.
I have two tables:

EmailX    Doc    DateChecked  Name
a@a.com   Doc 1  1/1/2021     a
a@a.com   Doc 2  1/15/2021    a
a@a.com   Doc 3  1/30/2021    b

EmailY    DateLogin
a@a.com   12/10/2022
a@a.com   11/10/2022
a@a.com   1/15/2021
a@a.com   1/25/2021

I want to join them on EmailX = EmailY, and then in the result, for each Email, I need to get the most recent DateLogin that is before DateChecked. I am hoping to not have to use join, as my second table has more than 50k records. So the results should be like this:

EmailX    Doc    DateChecked  Name  RecentDateLogin
a@a.com   Doc 1  1/1/2021     a     -
a@a.com   Doc 2  1/15/2021    a     1/15/2021
a@a.com   Doc 3  1/30/2021    b     1/25/2021

If I had to write it in SQL, it would be something like the below. I haven't tested it, but you get the idea.

SELECT t1.EmailX, t1.Doc, t1.DateChecked, t1.Name, max(t2.DateLogin) AS RecentDateLogin
FROM table1 AS t1
LEFT JOIN table2 AS t2
  ON t1.EmailX = t2.EmailY AND t1.DateChecked > t2.DateLogin
GROUP BY t1.EmailX, t1.Doc, t1.DateChecked, t1.Name

Thanks.
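A sketch of a join-free approach, assuming both tables are available as lookups (logins.csv and docs.csv are placeholder names) and that a login on the same day as DateChecked counts, as in the expected output above:

| inputlookup logins.csv
| eval Email=EmailY, _time=strptime(DateLogin, "%m/%d/%Y"), type="login"
| append [| inputlookup docs.csv | eval Email=EmailX, _time=strptime(DateChecked, "%m/%d/%Y"), type="doc"]
| sort 0 Email _time -type
| eval login_time=if(type=="login", _time, null())
| streamstats last(login_time) AS recent_login by Email
| where type=="doc"
| eval RecentDateLogin=strftime(recent_login, "%m/%d/%Y")
| fillnull value="-" RecentDateLogin
| table EmailX Doc DateChecked Name RecentDateLogin

The sort puts each login row ahead of any doc row with the same timestamp, so streamstats carries the latest login seen so far down to each doc row.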
We have a foo.csv which will be updated regularly, and we have searches which require some of the data in foo.csv to run properly. I would like to solve this using a macro in the searches, but am having difficulties.

foo.csv:

field1,field2,field3
bar11,bar21,bar31
bar12,bar22,bar32
bar13,bar23,bar33

I need "bar11","bar12","bar13" to be inserted into a search, like so:

| pivot fooDM barData min(blah) AS min_blah filter field1 in ("bar11","bar12","bar13")

So I created a macro, myMacro, which (when run alone in a search) gives a quoted comma-separated list:

[| inputlookup foo.csv | strcat "\"" field1 "\"" field1 | stats values(field1) AS field1 | eval search=mvjoin(field1, ",") | fields search]

I have attempted the macro both with "Use eval-based definition" and without, and I place it in the search like this:

| pivot fooDM barData min(blah) AS min_blah filter field1 in (`myMacro`)

I would love any help. Thank you!
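For comparison, a sketch that builds the quoting with mvjoin alone, assuming the macro only needs to emit the quoted list (whether pivot expands a macro inside filter ... in (...) the same way the search command does is worth testing separately):

[| inputlookup foo.csv
| stats values(field1) AS field1
| eval search="\"" . mvjoin(field1, "\",\"") . "\""
| fields search]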
I've got a report that is run on a schedule every five minutes. I would like "latest" to be set to the most recent increment of 5 minutes. This solution used to work but no longer appears to. Does anyone have any thoughts on how to achieve this? I cannot simply rely on latest=now(), because the report certainly will not always run exactly at the correct time. So, I need to be able to snap to the latest 5 minutes so that my counts do not get improperly calculated.

Edit: Here is my base search. I'm trying to get latest to snap to the most recent five-minute increment. It's not returning any results.

index=_internal source=*license_usage.log* type=Usage earliest=-0d@d ([makeresults | eval latest=(floor(now()/300))*300 | fields latest])

However, if I do something like this, it does return results. I don't want this; I was just testing to see if the syntax was messed up or something. The above base search is what I want because it snaps latest to the most recent five-minute increment of the hour.

index=_internal source=*license_usage.log* type=Usage earliest=-0d@d ([makeresults | eval latest=relative_time(now(), "-m") | fields latest])

Why does relative_time(now(), "-m") work but (floor(now()/300))*300 doesn't?
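One thing worth testing, a sketch assuming the problem is in how the subsearch result is handed back to the outer search (return emits latest=<value> explicitly, whereas the makeresults row also carries a _time field):

index=_internal source=*license_usage.log* type=Usage earliest=-0d@d [| makeresults | eval latest=floor(now()/300)*300 | return latest]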
We have an Enterprise Splunk installation that has clustered virtual indexers. We have been advised that we need real hardware for our indexers to scale up to the size we anticipate. What areas of performance are affected by having virtualized indexers versus hardware?
Hi, I have found several locations with a props.conf in my Docker splunk:8.2 image:

./opt/splunk/etc/apps/legacy/default/props.conf
./opt/splunk/etc/apps/search/local/props.conf
./opt/splunk/etc/apps/search/default/props.conf
./opt/splunk/etc/apps/splunk_internal_metrics/default/props.conf
./opt/splunk/etc/apps/splunk_monitoring_console/default/props.conf
./opt/splunk/etc/apps/sample_app/default/props.conf
./opt/splunk/etc/apps/SplunkLightForwarder/default/props.conf
./opt/splunk/etc/apps/splunk_archiver/default/props.conf
./opt/splunk/etc/apps/splunk_secure_gateway/default/props.conf
./opt/splunk/etc/apps/splunk_rapid_diag/default/props.conf
./opt/splunk/etc/apps/splunk_instrumentation/default/props.conf
./opt/splunk/etc/apps/learned/local/props.conf
./opt/splunk/etc/system/default/props.conf

I noticed that when I add a sourcetype in the Splunk Enterprise web interface (Settings -> Sourcetypes), it is saved in two locations:

apps/search/local/props.conf
apps/search/metadata/local.meta

I was just wondering if either of these two would be the right location to copy a manually configured props.conf file to, or if I should rather add it to /opt/splunk/etc/system/default/props.conf instead? Thanks
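For what it's worth, a common convention is to keep hand-written settings in a dedicated app's local directory rather than in any default directory, since default files can be overwritten on upgrade. A sketch, where the app name and sourcetype stanza are placeholders:

/opt/splunk/etc/apps/my_props_config/local/props.conf

[my:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_FORMAT = %Y-%m-%dT%H:%M:%S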
We are facing "4xx Client error" intermittently when executing jobs in synthetic hosted agents. When checking the script output, we can validate that the code is running successfully. We can able to ... See more...
We are facing "4xx Client error" intermittently when executing jobs in synthetic hosted agents. When checking the script output, we can validate that the code is running successfully. We can able to see the expected webpage screenshot captured by AppDynamics. Need your help to fix the issue.
Hi,

I would like to know the commands and procedures for handling Splunk failures:
1. What if the deployment server fails? Where do I check its status, and what command checks it through the CLI?
2. What if the cluster master fails, and what commands can check it?
3. Where can I read up on troubleshooting concepts?

I have done research but cannot find a proper solution anywhere. Please help me out with these topics.
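For reference, a sketch of the CLI checks that apply here, assuming a standard $SPLUNK_HOME install (run the first two on the deployment server, the last two on the cluster master):

$SPLUNK_HOME/bin/splunk status
$SPLUNK_HOME/bin/splunk list deploy-clients
$SPLUNK_HOME/bin/splunk show cluster-status
$SPLUNK_HOME/bin/splunk show cluster-bundle-status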
Hello, This article, https://research.splunk.com/stories/log4shell_cve-2021-44228/ , lists many log4j attack vectors and how Splunk can help detect them. This includes which datamodels to implement/use and the SPL. However, the SPL includes various macros, and these macros do not exist in my Splunk implementation. Where do I find these macros? Thanks and God bless, Genesius
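For context, the detections published on research.splunk.com are distributed through the Enterprise Security Content Update (ESCU) app, which also ships the macros they reference. A sketch of what such a definition looks like in macros.conf (the definition shown is illustrative, not copied from ESCU):

[security_content_summariesonly]
definition = summariesonly=false allow_old_summaries=true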
Hello experts, I have recently onboarded around 300 Windows devices. I followed the onboarding guide and the logs are being ingested as required, except for one field, i.e. sourcetype. The source and sourcetype come in as below:

source = WinEventLog:System
sourcetype = wineventlog

Can someone please help in identifying the issue? Thanks
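For reference, a sketch of the forwarder-side input stanza involved, assuming the data comes in via the Splunk Add-on for Microsoft Windows (the lowercase sourcetype may be a rename applied by the add-on's props.conf rather than a misconfiguration, so its WinEventLog-related props settings are worth checking):

[WinEventLog://System]
disabled = 0
renderXml = false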
I have a dashboard with 65 panels (all are parsers). It is taking a while to load. I have 4 base searches used across the 65 panels, but I am still facing some lag in loading the entire dashboard.

For better viewability, I categorized the panels based on their utility and show them in tabs using CSS. From a performance perspective, I don't find any option other than using base searches, but that isn't helping me either. Please provide effective ideas for improving the performance without reducing the number of panels involved.
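For anyone comparing notes, a minimal sketch of the base-search pattern in Simple XML, assuming the base search is a transforming search (non-transforming base searches retain raw events and can be far slower); the index and field names below are placeholders:

<search id="base_perf">
  <query>index=app_logs sourcetype=parser_stats | stats count AS events BY parser</query>
</search>
<panel>
  <chart>
    <search base="base_perf">
      <query>| where events > 0 | sort - events</query>
    </search>
  </chart>
</panel>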
Hi Team,

We have a dashboard with a button; clicking the button executes a function of a Python script, and we pass the data through an AJAX call. Before the Splunk 8 upgrade it was working fine, but after the upgrade we are getting the following error:

error:321 - Masking the original 404 message: 'The path 'xxxx' was not found.' with 'Page not found!' for security reasons in splunk.

I tried searching for this error code but couldn't find much. Can you please help?
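In case it is useful, a sketch of how a custom endpoint is typically exposed to splunkweb in an app's web.conf (the stanza name and pattern are placeholders and would need to match the path your AJAX call requests and your restmap.conf entry):

[expose:my_endpoint]
pattern = my_endpoint
methods = GET,POST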
I have two questions.

1. Is it possible to stack and unstack in a single column chart? In the chart below, the line on top of each bar is the total per stacked column. I want to have the total column first and then the stacked (split-up of total) next.
Problem: Since I am not able to do that, I had to add the total as an overlay.

[chart screenshot]

2. How can I show in the tooltip the value of a column apart from the value the chart shows by default?
Let's assume I have TotalParts and TotalPartsRunTime. If I plot the chart by TotalPartsRunTime, then I can see the label TotalPartsRunTime: value for each column/stacked column in the tooltip. Along with that, I also want to show TotalParts: value.
Problem: When I add TotalParts to the result, it is stacked as part of the already stacked column and creates a separate legend entry for it; what I want is to just show the TotalParts count in the tooltip.

Example scenario:
Application: ABC
val_2_B is the total time taken to process.
val_4 is the total count of val_2_B items that were processed [expected to show in the tooltip; the same should not be plotted in the chart].

Please let me know if I am not clear.

| makeresults
| eval application="FSD", val_1="A", val_2=4839, val_3=5000, val_4=1000
| append [| makeresults | eval application="ABC", val_1="B", val_2=1000, val_3=3215, val_4=2000]
| append [| makeresults | eval application="ABC", val_1="E", val_2=478, val_3=4328, val_4=3000]
| table application val_1 val_2 val_3 val_4
| sort application
| streamstats count by application
| eventstats list(val_1) as val_1 by application
| foreach val_* [| eval name="copy_<<FIELD>> ".mvindex(val_1,count-1) | eval {name}=<<FIELD>>]
| stats values(copy_*) as * by application
| fields - val_1*
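On the overlay half of question 1, a sketch of the Simple XML options that draw one field as a line over a stacked column chart (Total here is a placeholder field name):

<option name="charting.chart.stackMode">stacked</option>
<option name="charting.chart.overlayFields">Total</option>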
Hi Team,

We are collecting data from Alibaba Cloud through a heavy forwarder (using Alibaba add-ons) and pushing the data to our Splunk Cloud. What we are seeing is that it collects all data from Alibaba Cloud, which is huge in size. Upon validating it, we realized that the events below make up 80% of the whole volume and are not required by us. So we want to exclude the events below (rule_result=pass and status=200) from being collected. We know this can be done by editing the props.conf file, but we have been trying for a long time without success. Can someone please advise us how to edit this props.conf file to exclude the events below (rule_result=pass and status=200) at the heavy forwarder?

index=alibaba source="alibaba:cloudfirewall" rule_result=pass
index=alibaba source="alibaba:waf" status=200
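A sketch of the usual nullQueue pattern for this, assuming the events pass through the heavy forwarder's parsing pipeline; the transform names are chosen here, and the regexes would need adjusting to the actual raw event format (for JSON it may be "status":200 rather than status=200):

props.conf:

[source::alibaba:cloudfirewall]
TRANSFORMS-drop_fw_pass = drop_fw_pass

[source::alibaba:waf]
TRANSFORMS-drop_waf_200 = drop_waf_200

transforms.conf:

[drop_fw_pass]
REGEX = rule_result["=:\s]*pass
DEST_KEY = queue
FORMAT = nullQueue

[drop_waf_200]
REGEX = status["=:\s]*200
DEST_KEY = queue
FORMAT = nullQueue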