All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I'm using Splunk Enterprise 9.x with Universal Forwarders 9.x on Windows Server 2019. All my forwarders are connected to a deployment server. I notice the following, for example:
1. I update a deployment server app (say, add a new input stanza to inputs.conf).
2. I restart the deployment server.
3. I view the inputs at the forwarder using btool and see that my changes have propagated.
However, even though the updated inputs.conf file seems to have landed at the forwarder, I do not see the events defined by my new inputs.conf hitting the indexer until I restart the forwarder. Perhaps this is expected based on "When to restart Splunk Enterprise after a configuration file change" in the Splunk documentation? Is this expected, and if so, is there any way to restart the forwarder remotely using Splunk itself?
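
For reference, the deployment server can trigger that restart itself. A minimal serverclass.conf sketch, where the serverclass and app names are placeholders:

# serverclass.conf on the deployment server -- names are placeholders
[serverClass:my_serverclass:app:my_inputs_app]
restartSplunkd = true

After editing serverclass.conf, running splunk reload deploy-server pushes the change, and the deployment client restarts splunkd once the updated app lands.
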
I upgraded Splunk Enterprise to 9.1.2. After doing the upgrade I see high CPU utilization. Has anyone encountered a similar issue after upgrading? Splunk is running on Windows Server.
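
A diagnostic sketch for narrowing down which Splunk process is consuming the CPU, using the introspection data (the host value is a placeholder, and this assumes the _introspection index is populated for that host):

index=_introspection host=<upgraded_host> sourcetype=splunk_resource_usage component=PerProcess
| timechart span=5m avg(data.pct_cpu) by data.process
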
We are using Splunk 9 and are seeing a situation where a file gets re-ingested entirely each time the vendor product trims the older lines from the top of the file. The customer does not have any control over how the vendor product does the file trimming. Splunk seems to lose track of its pointer and processes each line again even though the lines have been read previously. This is happening on a Windows client. Any ideas on how to handle this issue?
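
For context on the mechanism: a monitored file is identified by a CRC of its first initCrcLength bytes (256 by default), so trimming lines off the top changes that fingerprint and Splunk treats the file as brand new. A sketch of the relevant inputs.conf attributes, with an assumed path:

# inputs.conf on the Windows forwarder -- path is an assumption
[monitor://C:\Vendor\app.log]
# the CRC of the first N bytes is the file's identity; head-trimming
# changes these bytes, so the whole file is re-read as a "new" file
initCrcLength = 256

Raising initCrcLength does not help when the head itself changes; workarounds usually involve having the producer roll to new files instead of trimming in place.
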
Hi, I have been trying to deploy an OpenTelemetry collector in my AWS EKS cluster to send logs to Splunk Enterprise. I deployed it using the SignalFx OpenTelemetry collector Helm chart: https://github.com/signalfx/splunk-otel-collector. But there seems to be an issue: the collector does not send logs with timestamps older than the OTel pods themselves. It starts sending data once new lines are written to the log files, but it never ingests the older entries. This seems to be a configuration issue; I would appreciate help with this.
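
One knob worth checking is the filelog receiver's start_at option, which controls whether lines that already exist when the collector first starts are read at all. A values.yaml sketch, assuming your chart version exposes the agent.config override mechanism:

agent:
  config:
    receivers:
      filelog:
        start_at: beginning   # read pre-existing lines, not just new ones
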
Hi, I have the below SPL and I would like to get the comparison over 15-minute time spans, i.e. if we run it today at 5 AM, then we should get a table with the count for every 15-minute window today versus the count for the same window yesterday. Could you please help? Current SPL:

basesearch earliest=-3d@d latest=now
| eval date_wday=strftime(_time,"%A")
| search NOT (date_wday=Saturday OR date_wday=Sunday)
| eval last_weekday=strftime(now(),"%A")
| eval previous_working_day=case(match(last_weekday,"Monday"),"Friday",match(last_weekday,"Tuesday"),"Monday",match(last_weekday,"Wednesday"),"Tuesday",match(last_weekday,"Thursday"),"Wednesday",match(last_weekday,"Friday"),"Thursday")
| where date_wday=last_weekday OR date_wday=previous_working_day
| eval DAY=if(date_wday=last_weekday,"TODAY","YESTERDAY")
| chart count by Name,DAY
| eval percentage_variance=abs(round(((YESTERDAY-TODAY)/YESTERDAY)*100,2))
| table Name TODAY YESTERDAY percentage_variance
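
A simpler sketch of the today-versus-yesterday comparison using timewrap, which lines up 15-minute buckets across days (this compares against the literal previous day; the working-day skip from the original SPL would still need to be layered on):

basesearch earliest=-1d@d latest=now
| timechart span=15m count
| timewrap 1day

timewrap renames the series so each row holds the current window's count next to the same window one day earlier.
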
I want to create an alert that notifies when Windows admins log in, showing which accounts they are using. I want to ensure they are not using admin accounts as daily drivers. I want the search to produce a count of the logins and the account each one used. Can someone give me some direction on this, please?
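
A starting-point sketch over Windows Security logon events; the index, sourcetype, and the "adm*" naming convention for admin accounts are assumptions to adapt to your environment:

index=wineventlog sourcetype="WinEventLog:Security" EventCode=4624 user="adm*"
| stats count by user, host
| sort - count
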
I have a timechart query which is giving me the result below. I want to exclude the columns that are all zero, like 02gdysjska2 and 2shbhsiskdf9. Note that these names can change and are not fixed.

_time                          003hfhdfs89huk  02gdysjska2  13hdgsgtsjwk  21dhsysbaisps  2shbhsiskdf9  5hsusbsosv
2024-01-23T09:45:00.000+0000   0               0            0             0              0             0
2024-01-23T09:50:00.000+0000   0               0            0             0              0             0
2024-01-23T09:55:00.000+0000   0               0            0             17961          0             0
2024-01-23T10:00:00.000+0000   0               0            1183          0              0             0
2024-01-23T10:05:00.000+0000   0               0            0             0              0             55
2024-01-23T10:10:00.000+0000   0               0            0             0              0             0
2024-01-23T10:15:00.000+0000   0               0            0             0              0             0
2024-01-23T10:20:00.000+0000   0               0            0             0              0             0
2024-01-23T10:25:00.000+0000   4280            0            0             0              0             0
2024-01-23T10:30:00.000+0000   0               0            0             0              0             0
2024-01-23T10:35:00.000+0000   0               0            0             0              0             0
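
One way to drop the all-zero columns without knowing their names in advance is to unpivot, filter, and re-pivot; a sketch to append to the existing timechart search:

<your timechart search>
| untable _time series count
| eventstats sum(count) as total by series
| where total > 0
| xyseries _time series count
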
Hi - I get the same problem running splencore.sh, after exporting the path and setting permissions on the cert. The server is CentOS 8 Stream. Can this be related to an error in the cert, or a missing firewall opening from my Splunk HF?

[root@hostname bin]# ./splencore.sh test
Traceback (most recent call last):
  File "./estreamer/preflight.py", line 33, in <module>
    import estreamer.crossprocesslogging
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/__init__.py", line 31, in <module>
    from estreamer.diagnostics import Diagnostics
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/diagnostics.py", line 43, in <module>
    import estreamer.pipeline
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/pipeline.py", line 29, in <module>
    from estreamer.metadata import View
ModuleNotFoundError: No module named 'estreamer.metadata'
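
Note the traceback is a Python import failure (the estreamer.metadata package cannot be found), which points at an incomplete app extraction rather than a cert or firewall problem; a quick check of the path from the traceback:

ls -l /opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/metadata/

If that directory is missing, re-extracting or re-deploying the TA-eStreamer package should restore it.
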
Hi, everyone, I have an old dashboard that I want to convert to the Dashboard Studio format. However, it seems that Dashboard Studio does not support the use of prefix, suffix, and delimiter in the same way. Is there any way to achieve the same effect using a search query?
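
One workaround is to rebuild the prefix/suffix/delimiter behavior in SPL itself with mvmap and mvjoin; a sketch assuming the goal is an OR-joined host filter (the index and field names are placeholders):

index=<your_index> | stats values(host) as host
| eval search_clause="(" . mvjoin(mvmap(host, "host=\"" . host . "\""), " OR ") . ")"

This produces the same "(host=\"a\" OR host=\"b\")" string that prefix="(", delimiter=" OR ", suffix=")" would have built.
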
Hi, can anyone help me out with how to turn these GUI Custom Info events into email actions using the Predefined Variables concept? Due to the dynamic behavior of pod names, AppDynamics by default gives only count-based alerts rather than the name of the pod that went down. Do we have any templates for this type of requirement? https://www.bing.com/ck/a?!&&p=0eb6569b2b7936e0JmltdHM9MTcwNTg4MTYwMCZpZ3VpZD0zY2VjNWZlOS1lNDUzLTZkNDctMDVjOC00YmU2ZTVjMzZjMmEmaW5zaWQ9NTI1OA&ptn=3&ver=2&hsh=3&fclid=3cec5fe9-e453-6d47-05c8-4be6e5c36c2a&psq=pod+down+alert+appdynamics&u=a1aHR0cHM6Ly9jb21tdW5pdHkuYXBwZHluYW1pY3MuY29tL3Q1L0luZnJhc3RydWN0dXJlLVNlcnZlci1OZXR3b3JrL0luZGl2aWR1YWwtUG9kLVJlc3RhcnQtYWxlcnRzL3RkLXAvNTExMTk&ntb=1
How do I get peak stats and a count of successes and errors for a month in one table?
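
A sketch of one way to get both in a single table, assuming a status field that distinguishes successes from errors (the index, field, and values are placeholders):

index=<your_index> earliest=-1mon@mon latest=@mon
| bin _time span=1h
| stats count(eval(status=="success")) as success count(eval(status=="error")) as error count as total by _time
| stats sum(success) as success, sum(error) as error, max(total) as peak_hourly_volume
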
Hi All, I am new to the Splunk clustering environment and I have a few questions that came up when I attended an interview. Can anyone please help me with these?
1. Can we delete an index folder? Will we have permission to delete the index folder Splunk\var\lib\splunk\TestDB?
2. Can we copy an index folder and paste it into some other index folder, and will we then be able to search the logs?
3. Where do we install the DB Connect app and other apps: on the search head cluster or the indexer cluster?
4. What is the process name when we extract logs with props.conf and transforms.conf?
5. How do we upgrade a Splunk cluster environment, in simple steps?
6. What does the search head captain do?
Thanks, Karthigeyan R
I have a Splunk search that returns the wrong results from a KV store if the secondUID field is set to itself before doing the lookup. This is distilled from the actual search simply to show the bug. Both secondUID and uID should be represented as strings. Does anybody know why | eval secondUID=secondUID causes the lookup command to return the wrong results? When it is commented out, the correct results are returned. The results are consistently the same wrong results when they are wrong, and the errors are event-count dependent. For instance, if I raise the head command from 4000 results to 10000 results, the wrong-result rate for the lookup goes from 4.3% to 11.83% given the lines I am passing in for this example. If I pass in a different set of events, the results are still wrong and consistently the same wrong results, but not necessarily the same percentage of wrong results compared to the other starting events. If you either comment out the | eval secondUID=secondUID line or change it to | eval secondUID=tostring(secondUID), then the correct results are returned from the lookup command. If you swap tostring() for tonumber(), the number of wrong lookups goes up. I don't think this is intended functionality, because | eval secondUID=secondUID should not change the results IMO, and the percentage of errors depends on how many events are passed through the search: more events = higher % of errors. The string comparison functions in the where clauses also show that nothing should be changing.

| inputlookup kvstore_560k_lines_long max=10000
| stats dc(uID) as uID by secondUID
| where uID=1
| head 4000 ```keep 4000 results with the 1=1 uID to secondUID relationship established```
| eval secondUIDArchive=secondUID ```save the initial value```
| where match(secondUIDArchive, secondUID) and like(secondUIDArchive, secondUID) ```initial value is unchanged```
| eval secondUID=secondUID ```this line causes the search to return different results compared to when commented out```
| where match(secondUIDArchive, secondUID) and like(secondUIDArchive, secondUID) ```string comparison methods show they are the same still```
| lookup kvstore_560k_lines_long secondUID output uID ```output the first UID again where there should be a 1=1 relationship```
| table uID secondUID secondUIDArchive
| stats count by uID ```the final output counts of uID vary based on whether the eval of secondUID to itself is commented out```
Has anyone tried this add-on to pull TFS commits into Splunk via the Azure DevOps (Git Activity) technical add-on? I tried installing this app on one of the heavy forwarders, but the inputs section of the add-on does not work.
I would like to predict the memory, CPU, and storage usage of my Splunk servers (indexers, search heads). The step-wise plan is to first do an analysis of current usage and then predict the next 6 months of usage for my own Splunk platform (indexers, search heads, heavy forwarders).
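
A sketch for the analyze-then-forecast step using introspection data and the predict command (the host and field choices are assumptions; 180 daily points approximates 6 months):

index=_introspection sourcetype=splunk_resource_usage component=Hostwide host=<indexer>
| timechart span=1d avg(data.cpu_system_pct) as cpu avg(data.mem_used) as mem_mb
| predict cpu mem_mb future_timespan=180
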
Hello, I'm facing an issue with dashboard graphs. When checking the graphs from the metric browser, all the data shows fine. But when we create a dashboard with the same data, we see some gaps. [Screenshots of both views were attached to the original post.] Does someone have an idea why this is happening?
Hello community, how can I make a playbook run every 5 minutes automatically?
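
SOAR does not schedule playbooks natively, but the REST API can kick one off, so an external cron job is a common approach. A sketch, where the URL, token, container ID, and playbook ID are all placeholders:

# run_playbook.py -- sketch; schedule with cron, e.g.:
#   */5 * * * * /usr/bin/python3 /opt/scripts/run_playbook.py
import requests

SOAR_URL = "https://soar.example.com"
TOKEN = "<automation-user-token>"

resp = requests.post(
    f"{SOAR_URL}/rest/playbook_run",
    headers={"ph-auth-token": TOKEN},
    json={
        "container_id": 123,                 # container to run against
        "playbook_id": "local/my_playbook",  # repo/playbook name
        "scope": "new",
        "run": True,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
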
I've been searching for a while, but I haven't been able to find how to access an alert's description from within my add-on's alert action Python code. I'm using helper.get_events() to get the alert's triggered events and helper.settings to get the title of the alert. Both are from https://docs.splunk.com/Documentation/AddonBuilder/4.1.4/UserGuide/PythonHelperFunctions. That documentation page doesn't seem to list any way to pull an alert's description, though. Does anyone know where it's stored and how to access it?
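
The description lives on the saved search itself, so one option is to fetch it over REST with the session key the alert action already receives. A sketch, assuming helper.settings carries session_key, server_uri, and search_name as the Add-on Builder payload documentation describes:

# sketch: pull the alert's description from the savedsearches REST endpoint
import json
import ssl
import urllib.parse
import urllib.request

session_key = helper.settings["session_key"]
server_uri = helper.settings["server_uri"]
search_name = urllib.parse.quote(helper.settings["search_name"], safe="")

url = f"{server_uri}/services/saved/searches/{search_name}?output_mode=json"
req = urllib.request.Request(url, headers={"Authorization": f"Splunk {session_key}"})
ctx = ssl._create_unverified_context()  # local REST call, self-signed cert
with urllib.request.urlopen(req, context=ctx) as resp:
    content = json.load(resp)
description = content["entry"][0]["content"].get("description", "")
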
I am working on a playbook where there is a need to copy the current event's artifacts into a separate, already-open case. We are looking for a way to automate this through phantom.collect + phantom.add_artifact or other means. We have a way to pass in the existing case ID and need a solution to duplicate artifacts from the running event into the case specified by that case ID.
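
A playbook-code sketch of that collect-then-add pattern (the datapaths and target_case_id are assumptions; the phantom module is available inside SOAR playbooks):

def copy_artifacts(container, target_case_id):
    # gather cef, name, and label from every artifact on the current container
    rows = phantom.collect2(
        container=container,
        datapath=["artifact:*.cef", "artifact:*.name", "artifact:*.label"],
    )
    for cef, name, label in rows:
        # recreate each artifact on the existing case (target_case_id)
        success, message, artifact_id = phantom.add_artifact(
            container=target_case_id,
            raw_data={},
            cef_data=cef,
            label=label or "event",
            name=name or "copied artifact",
            severity="medium",
            identifier=None,
            artifact_type="generic",
        )
        phantom.debug("add_artifact: {} {} id={}".format(success, message, artifact_id))
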
When monitoring Windows systems, which logs do you find give the best information for finding security events and then tracking an event down from start to finish?