All Topics

I want to create an alert that notifies me when Windows admins log in and which accounts they are using. I want to ensure they are not using admin accounts as daily drivers. I want the search to produce a count of the logins and which account each one used. Can someone give me some direction on this, please?
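A sketch of one way to approach this, assuming Windows Security events are indexed with sourcetype WinEventLog:Security, that EventCode 4624 marks a successful logon, and that your admin accounts follow a naming convention such as an adm_ prefix (all of these are assumptions to adapt to your environment):

```spl
sourcetype="WinEventLog:Security" EventCode=4624 user="adm_*"
| stats count AS logins BY user, ComputerName
| sort - logins
```

Saved as an alert (e.g. scheduled hourly, triggering when results > 0), this would list each admin account and how many times it logged in, which you could then compare against expected interactive-use accounts.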
I have a timechart query which is giving me the result below. I want to exclude the columns containing only zeros, like 02gdysjska2 and 2shbhsiskdf9. Note: these column names can change and are not fixed.

_time                          003hfhdfs89huk  02gdysjska2  13hdgsgtsjwk  21dhsysbaisps  2shbhsiskdf9  5hsusbsosv
2024-01-23T09:45:00.000+0000   0               0            0             0              0             0
2024-01-23T09:50:00.000+0000   0               0            0             0              0             0
2024-01-23T09:55:00.000+0000   0               0            0             17961          0             0
2024-01-23T10:00:00.000+0000   0               0            1183          0              0             0
2024-01-23T10:05:00.000+0000   0               0            0             0              0             55
2024-01-23T10:10:00.000+0000   0               0            0             0              0             0
2024-01-23T10:15:00.000+0000   0               0            0             0              0             0
2024-01-23T10:20:00.000+0000   0               0            0             0              0             0
2024-01-23T10:25:00.000+0000   4280            0            0             0              0             0
2024-01-23T10:30:00.000+0000   0               0            0             0              0             0
2024-01-23T10:35:00.000+0000   0               0            0             0              0             0
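One common pattern for dropping all-zero columns from a timechart, regardless of what the columns are named, is to untable the results, filter out series whose total is zero, and rebuild the table with xyseries:

```spl
<your timechart search>
| untable _time series count
| eventstats sum(count) AS total BY series
| where total > 0
| fields - total
| xyseries _time series count
```

Because the filtering happens on the untabled series values rather than on hard-coded column names, it keeps working as the column names change.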
Hi - I get the same problem running splencore.sh, after exporting the path and setting permissions on the cert. The server is CentOS 8 Stream. Can this be related to an error in the cert, or to a missing firewall opening from my Splunk HF?

[root@hostname bin]# ./splencore.sh test
Traceback (most recent call last):
  File "./estreamer/preflight.py", line 33, in <module>
    import estreamer.crossprocesslogging
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/__init__.py", line 31, in <module>
    from estreamer.diagnostics import Diagnostics
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/diagnostics.py", line 43, in <module>
    import estreamer.pipeline
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/pipeline.py", line 29, in <module>
    from estreamer.metadata import View
ModuleNotFoundError: No module named 'estreamer.metadata'
Hi everyone, I have an old dashboard that I want to convert to the Dashboard Studio format. However, it seems that the new Dashboard Studio does not support the use of prefix, suffix, and delimiter in the same way. Is there any way to achieve the same effect using a search query?
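As a possible workaround, the prefix/suffix/delimiter behavior can be reproduced in SPL with eval and mvjoin. A minimal illustration (the field names and literal values here are made up):

```spl
| makeresults
| eval values=split("hostA,hostB,hostC", ",")
| eval clause="(host=" . mvjoin(values, " OR host=") . ")"
```

This yields clause="(host=hostA OR host=hostB OR host=hostC)", i.e. prefix "(host=", delimiter " OR host=", and suffix ")".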
Hi, can anyone help me out with how to turn these GUI Custom Info events into email actions using the Predefined Variables concept? Due to the dynamic behavior of pod names, AppDynamics by default gives only count-based alerts, instead of the name of the pod that went down. Do we have any templates for this type of requirement? https://community.appdynamics.com/t5/Infrastructure-Server-Network/Individual-Pod-Restart-alerts/td-p/51119
How can I get peak stats and a count of successes and errors for a month in one table?
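Without knowing the exact data, a hedged sketch: if the events have a status field and a numeric metric (both assumptions), conditional counts and a max can be combined in a single stats call over the month:

```spl
index=myapp earliest=-1mon@mon latest=@mon
| stats count(eval(status="success")) AS successes,
        count(eval(status="error"))   AS errors,
        max(response_time)            AS peak_response_time
```

Swap the index, status values, and metric for whatever "peak stats" refers to in your data.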
Hi All, I am new to Splunk clustered environments and I have a few questions that came up in an interview. Can anyone please help me with these?
1. Can we delete an index folder? Will we have permission to delete the index folder Splunk\var\lib\splunk\TestDB?
2. If we copy an index folder and paste it into another index folder, will we be able to search the logs?
3. Where should we install the DB Connect app and other apps: on the search head cluster or the indexer cluster?
4. What is the process name when we extract logs using props.conf and transforms.conf?
5. What are the simple steps to upgrade a Splunk cluster environment?
6. What does the search head captain do?
Thanks, Karthigeyan R
I have a Splunk search that returns the wrong results from a KV store if the secondUID field is set to itself before doing the lookup. This is distilled from the actual search simply to show the bug. Both secondUID and uID should be represented as strings. Does anybody know why | eval secondUID=secondUID causes the lookup command to return the wrong results? When it is commented out, the correct results are returned. The results are consistently the same wrong results when they are wrong, and the errors are event-count dependent. For instance, if I switch the head command on line 4 from 4000 results up to 10000 results, the wrong-result rate of the lookup goes from 4.3% to 11.83% for the events I am passing in for this example. If I pass in a different set of events, the results are still wrong and consistently the same wrong results, but not necessarily the same percentage of wrong results compared to the other starting events. If I either comment out that eval on line 8 or do | eval secondUID=tostring(secondUID), then the correct results are returned from the lookup command. If I switch tostring() with tonumber(), the number of wrong lookups goes up. I don't think this is intended behavior, because | eval secondUID=secondUID should not change the results IMO, and the percentage of errors depends on how many events are passed through the search: more events = higher % of errors. The string comparison functions in the where clauses also show that nothing should be changing.
| inputlookup kvstore_560k_lines_long max=10000
| stats dc(uID) as uID by secondUID
| where uID=1
| head 4000 ```keep 4000 results with the 1=1 uID to secondUID relationship established```
| eval secondUIDArchive=secondUID ```save the initial value```
| where match(secondUIDArchive, secondUID) and like(secondUIDArchive, secondUID)
  ```initial value is unchanged```
| eval secondUID=secondUID ```this line causes the search to return different results compared to when commented out```
| where match(secondUIDArchive, secondUID) and like(secondUIDArchive, secondUID) ```string comparison methods show they are the same still```
| lookup kvstore_560k_lines_long secondUID output uID ```output the first UID again where there should be a 1=1 relationship```
| table uID secondUID secondUIDArchive
| stats count by uID ```the final output counts of uID vary based on whether the eval on line 8 is commented out```
Has anyone tried this add-on to pull TFS commits into Splunk: Azure DevOps (Git Activity) - Technical Add-On? I tried installing the app on one of the heavy forwarders, but the inputs section of the add-on does not work.
I would like to predict the memory, CPU, and storage usage of my Splunk servers (indexers, search heads). The step-wise plan is to first analyze current usage and then predict the next 6 months of usage for my own Splunk platform (indexers, search heads, heavy forwarders).
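As a starting point, Splunk's own _introspection index records per-host resource usage, and the predict command can extrapolate a timechart. A sketch (the data.* field names below come from the Hostwide introspection data; verify them against your version, and my-indexer is a placeholder):

```spl
index=_introspection sourcetype=splunk_resource_usage component=Hostwide host=my-indexer
| timechart span=1h avg(data.cpu_system_pct) AS cpu_pct
| predict cpu_pct AS cpu_forecast future_timespan=4320
```

future_timespan=4320 asks for roughly 180 days of hourly points; repeat the pattern for memory fields, and use dbinspect or the Monitoring Console to trend storage growth.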
Hello, I'm facing an issue with dashboard graphs. When checking the graphs in the metric browser, all the data shows fine (screenshot omitted). But when we create a dashboard with the same data, we see some gaps (screenshot omitted). Does anyone have an idea why this is happening?
Hello community, how can I make a playbook run every 5 minutes automatically?
Today, we welcome the voice of Sophie Mills to share her leadership perspective on Splunk blogs. Sophie, who is the Director of Global Education Ecosystem Strategy and Development at Splunk, is responsible for the Splunk Authorized Learning Partner (ALP) program, designed to expand the global reach of Splunk's education ecosystem. Read Sophie’s blog to find out more about Splunk ALPs and how they help Splunk Education scale learning to a more global reach with local languages and timezones.

The Splunk Authorized Learning Partner Program goes beyond mere education; it's focused on empowering both individuals and organizations worldwide. These partners are crucial in closing the skills gap, offering top-notch, localized, and tailored Splunk training. In our journey through the constantly changing digital world, we remain dedicated to broadening our worldwide program. This expansion aims to foster learner achievement and assist organizations in addressing the increasing need for experts in data-centric positions.

Explore more educational opportunities and deepen your understanding of Splunk through our Authorized Learning Partners. Read Sophie’s blog and discover more about how you can learn in your own region, timezone, and language here.

-- Callie Skokos on behalf of the Splunk Education Crew
I've been searching for a while, but I haven't been able to find how to access an alert's description from within my add-on's alert action Python code. I'm using helper.get_events() to get the alert's triggered events and helper.settings to get the title of the alert. Both are from https://docs.splunk.com/Documentation/AddonBuilder/4.1.4/UserGuide/PythonHelperFunctions. That documentation page doesn't seem to list any way to pull an alert's description, though. Does anyone know where it's stored and how to access it?
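I'm not aware of a helper function that exposes the description directly, but the description lives on the saved search itself, so one option (an assumption, not confirmed for Add-on Builder specifically) is to query the savedsearches REST endpoint. From SPL that looks like:

```spl
| rest /servicesNS/-/-/saved/searches
| search title="My Alert Name"
| fields title, description
```

From the alert action's Python, you could call the same endpoint over HTTPS using the session key available in the alert action's runtime settings. "My Alert Name" is a placeholder.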
I am working on a playbook where there is a need to copy the current event's artifacts into a separate, open, existing case. We are looking for a way to automate this through phantom.collect + phantom.add_artifact or other means. We have a way to pass in the existing case id and need a solution to duplicate artifacts from the running event into the case specified by that case id.
When monitoring Windows systems which logs do you find to give the best information for finding security events and then tracking down the event from start to finish?
I am trying to convert a dashboard from Simple XML to Dashboard Studio. In the original dashboard there is a token that uses $click.name2$, which links to the corresponding field name in another dashboard. To my understanding, the equivalent of $click.name2$ from Simple XML should be $name$ in Dashboard Studio; however, when I use $name$ the correct value is not returned. What would be the equivalent of $click.name2$ in Dashboard Studio? This is for a single value.
In 2023, we were routinely reminded that the digital world is ever-evolving and susceptible to new disruptions. At Splunk, it's our mission to help our customers use our products more successfully to build greater digital resilience. So to kick off the new year, we are excited to offer new onboarding toolkits and learning tracks to help you master our products!

What are the onboarding toolkits?
The Onboarding Toolkits for Platform, Security, and Observability have been thoughtfully designed to introduce you to the Splunk resource ecosystem, as well as offer a roadmap for how best to start using your product(s). These toolkits highlight three stages for successful onboarding: setting a strong foundation, adding your first use cases, and then maximizing value from your product(s).
- Security Onboarding Toolkit
- Observability Onboarding Toolkit
- Platform Onboarding Toolkit

Curated learning tracks
Within these toolkits, we’ve included new learning tracks, organized by both role and difficulty. Our education team has put together some amazing courses to help you deepen expertise and deliver results, including a ton of free options. Whether you’re a developer, security pro, or Splunk administrator, these tracks can help you find courses that are best suited to your needs.
- Learning tracks for Security
- Learning tracks for Observability
- Learning tracks for Platform

Cheers, Arif Virani, Splunk Customer Success
I need to look for an incoming email, and if the email matches a certain subject, I need to check another sourcetype to see if there was a hit on that sourcetype within an hour of the email coming through.
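One hedged sketch uses a subsearch to turn each matching email into a one-hour time window for the second search (the index, sourcetype, and subject values are placeholders):

```spl
index=main sourcetype=other_sourcetype
    [ search index=mail sourcetype=email subject="Target Subject"
      | eval earliest=_time, latest=_time+3600
      | fields earliest, latest
      | format ]
```

The subsearch emits earliest/latest pairs, so the outer search only returns hits that fall within the hour after a matching email. The usual subsearch limits (result count, runtime) apply if many emails match.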
Hi All, I am almost a beginner with Splunk, but my org uses the tool as a log management utility. I need help getting direction on how to filter data from logs in a distributed async logging product.
Problem statement: there are multiple log files on multiple Linux boxes being generated every second. I need to:
1. Search for ids created, their creation timestamps, and the batches under which these ids exist.
2. Filter the ids based on passed batches (this is another line in the same log file).
3. Calculate the end-to-end processing time for each id by finding the processed id from step 1 and subtracting the timestamp of step 1 from the timestamp of step 3 (this is again printed in the log files).
I have been doing this using Oracle external tables and Linux shell scripts, but need to do it in a better way using Splunk. Opinions are highly appreciated.
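The steps above map fairly naturally onto rex plus stats. A sketch, assuming the id and batch can be extracted with a regex and that "created" and "processed" appear literally in the events (all assumptions about your log format):

```spl
index=app_logs ("created" OR "processed")
| rex "id=(?<id>\w+)\s+batch=(?<batch>\w+)"
| search batch IN (<your passed batches>)
| stats earliest(_time) AS created_time, latest(_time) AS processed_time BY id
| eval e2e_seconds = processed_time - created_time
```

With universal forwarders monitoring the log directories on each Linux box, this replaces the external-table/shell pipeline with a single search.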