Is it feasible to configure Splunk to authenticate with Oracle databases using LDAP accounts?
Greetings. I have a Heavy Forwarder that constantly sends logs to Splunk Cloud, but I only receive the logs in the cloud at 9, 10, or 11 pm, and then at 1 or 2 am the next day I start getting logs every minute. The source is a FortiGate. I have 4 nodes; 3 work perfectly and 1 is the one giving me problems. What could be happening?
I have a Splunk query that helps me visualize different APIs vs. time, as below. Using this query I can see a line graph for each API over the given time range. index=sample_index | timechart span=1m count by API   My actual requirement is to get the count by 2 fields (API and Consumer), i.e. I need a time graph for each API and Consumer combination: one graph for API1_Consumer1, one for API1_Consumer2, one for API2_Consumer3, and so on. How can I achieve that?
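timechart only splits by a single field, so one common workaround (a minimal sketch against the index above; the field names API and Consumer are taken from the question) is to concatenate the two fields into one before the timechart:

index=sample_index
| eval API_Consumer=API."_".Consumer
| timechart span=1m count by API_Consumer

Each distinct API/Consumer pair then becomes its own series in the chart.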
Seeing the following error when running a PowerShell script, both manually and as Splunk, as shown below - anyone got any pointers please? All the scripts in the TA fail in this manner:

c:\Program Files\SplunkUniversalForwarder\bin>splunk cmd "c:\Program Files\SplunkUniversalForwarder\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd" v15 get-databasestats_2013.ps1

-PSConsoleFile : The term '-PSConsoleFile' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ -PSConsoleFile \bin\exshell.psc1 -command . 'c:\Program Files\SplunkU ...
+ ~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (-PSConsoleFile:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException
Hi, I'm attempting to calculate the average of the last six CPU event values. If the average of those six events is greater than 95%, an alert must be sent. I tried the query below, but it produced nothing. Can someone help?

index=* sourcetype=cpu CPU=all host=* earliest=-35m
| rename "%_Idle_Time" as Percent_Idle_Time
| eval CpuUsage=coalesce(100-Percent_Idle_Time,100-PercentIdleTime)
| streamstats count by host
| where count<=6
| stats avg(values(CpuUsage)) as "Average of CpuUsage last 6 intervals(5mins range)" by host

Regards, Satheesh
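A hedged rework rather than a verified fix: values() is itself a stats function and cannot be nested inside avg(), which is the most likely reason the search returns nothing, and streamstats should see events newest-first so that count<=6 keeps the most recent six. Assuming the same field names as above:

index=* sourcetype=cpu CPU=all host=* earliest=-35m
| rename "%_Idle_Time" as Percent_Idle_Time
| eval CpuUsage=coalesce(100-Percent_Idle_Time, 100-PercentIdleTime)
| sort 0 -_time
| streamstats count by host
| where count<=6
| stats avg(CpuUsage) as avg_cpu_last6 by host
| where avg_cpu_last6 > 95

With the final where clause, the alert can simply trigger whenever the search returns one or more results.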
I am getting "page not reachable" after finishing installing Splunk Enterprise on an AWS virtual machine. I completed all the commands provided by this page. Can you help me, please?
Hi Guys, I am trying to learn Phantom app development using an on-prem Phantom installation, and have come across really weird behavior when adding data to action_results. If I have some data I want to add, say: data = ["abc", "def", "ghi", "jkl"] it makes sense that I might want to do something like: for d in data:     action_result.add_data(d) and expect to get an action result with 4 entries... instead what results is an action result with 4 duplicates of the above data, effectively 16 entries: [["abc", "def", "ghi", "jkl"], ["abc", "def", "ghi", "jkl"], ["abc", "def", "ghi", "jkl"], ["abc", "def", "ghi", "jkl"]] Maybe this is intended behavior? To me this is weird, but since this is in my own app I just have to find ways to work around it. However, this behaviour also exists in all the other apps, such as the Splunk app. If I use the Splunk app to make a search against my Splunk instance, say with the query index=test | head 6, then I would expect to get 6 results; however, since the Splunk app also iterates over the results it receives and uses the add_data method, the action results end up being 6 duplicate lists of 6 entries, effectively 36 results. I am unable to parse this in any playbook blocks. If I write JUST custom code blocks then I can extract the desired results, but then what is the point of playbooks if I am just writing everything in Python code anyway? Also, what if I expect my search to return 1000 results? Having the action result grow quadratically means that the action result will be 1,000,000 items, which gets ridiculous. Is this expected behaviour? If so, how do I get the results using the GUI playbook editor? Or is my Phantom instance borked somehow? (I ran the normal installer and haven't made any changes to my instance.)
Hello, I have some issues with field extraction since key:value pair and non-key:value pair fields are within the same event. I am not sure how to implement regex to extract these fields. A few sample events are given below. In the original post, key:value pairs were underlined and non-key:value fields (values separated by spaces) were marked in bold in one of the sample events. Any recommendation will be highly appreciated. Thank you.

[2023-04-25 07:43:23,923] INFO  signin           2055ddf870d6un9d1  6567bfb signIn SUCCESS user:bn4bfb monitorId:2056dhf40d6b9d1 IPaddr:15.218.61.1 userAgent:"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36 Edg/111.0.1661.41" userDescription:"64b9ib" sessionType:STANDARD browser:Chrome(111) os:windows
[2023-04-25 07:44:01,520] INFO  signin           009012cf0cce64c7  rmk9ddb signIn SUCCESS user:o0glddb monitorId:00amki2cf0cce6c7 IPaddr:15.198.2.35 userAgent:"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36 Edg/101.0.1661.41" userDescription:"ugdi8db" sessionType:STANDARD browser:Chrome(111) os:windows
[2023-04-25 07:45:13,632] INFO  signin           b9660cc3afe54c2  j56lb signIn SUCCESS user:j79lb monitorId:bop9060cc3afe54c2 IPaddr:10.209.23.194 userAgent:"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36 Edg/111.0.1661.41" userDescription:"jw908b" sessionType:STANDARD browser:Chrome(111) os:windows
[2023-04-25 07:46:09,358] INFO  signin           0904c268c6b7e9d58  jw095lb signOut SUCCESS user:090wjlb monitorId:59c9098c6b7e9d5io
[2023-04-25 07:46:47,077] INFO  signin           ee2bop9853a5623c  65co9b signIn SUCCESS user:6op0bb monitorId:ee2klo853a562op IPaddr:10.54.190.56 userAgent:"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36 Edg/111.0.1661.41" userDescription:"6op0bb" sessionType:STANDARD browser:Chrome(111) os:windows
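A hedged starting point (the field names below are illustrative, inferred from the samples): extract the positional, non-key:value prefix with one rex, then pull each key:value field with a targeted rex, handling the quoted userAgent separately:

| rex "^\[(?<log_time>[^\]]+)\]\s+(?<level>\w+)\s+(?<component>\w+)\s+(?<txn_id>\S+)\s+(?<short_id>\S+)\s+(?<action>\w+)\s+(?<result>\w+)"
| rex "user:(?<user>\S+)"
| rex "monitorId:(?<monitorId>\S+)"
| rex "IPaddr:(?<IPaddr>\S+)"
| rex "userAgent:\"(?<userAgent>[^\"]*)\""
| rex "sessionType:(?<sessionType>\S+)"

Events like the signOut sample simply leave the absent fields (IPaddr, userAgent, and so on) null.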
Hi All, Is there a way to import a panel into Dashboard Studio, in a similar way to Simple XML? Simple XML: this would import/insert that panel into the dashboard, so I could manage all filters/inputs in a global application and not have to manage it within the dashboard. <row> <panel id="overview_filter" ref="overview_filter" app="Filters"></panel> </row> Why: imagine you have n dashboards, all of which use the same set of inputs/tokens. Instead of writing the inputs/token code n times (once per dashboard), you can write it once and import it with the panel and panel-id. The benefit here, apart from less coding, is that whenever anyone wants to change the name or title of a token, or wants to add a new token, you only have to do it in 1 place and it automatically goes into all dashboards. I could not find the same way to import code/panels into Dashboard Studio. I am sure it exists - my searches just couldn't find how to do it. cheers -brett
Need help in creating a Splunk query that shows the value of fields as zero when they have null values, and shows the exact count for numeric values. For example: I want to search all the events for fields containing a specific keyword I am searching for. For the others, where that keyword is not present in the field value, the result should show a count of 0 instead of nothing.
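The question leaves the exact fields open, so this is only a sketch with hypothetical names (my_index, some_field, my_keyword): count keyword-matching events per field value, then turn any missing counts into 0 with fillnull:

index=my_index some_field=*
| stats count(eval(searchmatch("my_keyword"))) as keyword_count by some_field
| fillnull value=0 keyword_count

count(eval(...)) counts only the events where the condition holds, so values of some_field with no keyword hits come back as 0 rather than disappearing; the exact shape depends on the real data.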
Hello all! I am attempting to dynamically add 'Next Steps' to a notable event based on a lookup table in my correlation search Splunk query. I was wondering if it is possible to do this using variable substitution? For example, if my notable name is X, then populate the 'Description' and 'Next Steps' columns with the associated fields in the lookup table. If this is not possible at the moment, can anyone suggest another way that I could get this data to populate dynamically? Thanks!
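One possible shape for this, sketched with a hypothetical lookup file (notable_next_steps.csv with columns notable_name, description, next_steps): enrich the correlation search results from the lookup, then reference the resulting fields as tokens in the notable action's Description and Next Steps boxes (ES substitutes $field$ tokens from the search results into notable fields; verify the token syntax against your ES version):

... your correlation search ...
| eval notable_name="X"
| lookup notable_next_steps.csv notable_name OUTPUT description, next_steps

Then, in the notable event action, set Description to $description$ and Next Steps to $next_steps$ so each notable picks up the row matched for its name.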
I want to avoid using saved searches and lookup tables as much as possible so it's easily maintainable by anyone on the team. Also, I want to make it as future-proof as possible so it "just works" with little need to update or modify. My end goal is to create a query that produces a True/False (or equivalent) result for each value when compared to the max value of the same field. To explain in more detail: I want the query to use the latest version of the Trellix/McAfee Agent reported in Splunk and then compare that value against the full set, returning True/False depending on whether the numbers match. I can get exactly what I need using the query below, but it needs to be manually updated every time the Agent version is updated.

source=trellix AgentVer=* | eval AgentStatus=if(AgentVer=="5.7.9.182", "True","False") | stats count BY AgentStatus

Simple. Where this gets complicated is when I try to isolate the latest version. I've tried all kinds of ways to extract that version number, put it into its own field, and then do the comparison, and nothing I've tried works. Here's an example of what I have tried, but this is not exhaustive because I've tried 500 different ways...

<!-- This query produces the version I need into a new field -->
source=trellix AgentVer=* | stats max(AgentVer) AS TAV

<!-- Then I try to compare the value in the new TAV field to the old field -->
source=trellix AgentVer=* | stats max(AgentVer) AS TAV | eval Status=if(AgentVer==TAV, "True","False") | table Status
<!-- No good -->

<!-- So then I try to take it a step further -->
source=trellix AgentVer=* | stats max(AgentVer) AS TAV | rex field=TAV (?<TA>"^(?:^\d+(\.\d+)+$)") | eval Status=if(AgentVer==TA, "True","False") | table Status
<!-- No good -->

<!-- Ok, maybe a subsearch will work -->
source=trellix AgentVer=* [search source=trellix AgentVer=* | stats max(AgentVer=*) AS TA | table TA] | eval Status=if(AgentVer=TAV, "True","False") | table Status
<!-- No good -->

Again, the above are just examples of what I've tried. I've tried replacing | stats max(AgentVer) with | eval TA=max(AgentVer), I've tried chart instead of stats, etc. I've even tried to just duplicate the field and use the duplicate instead of the original, and still no luck. I've not found anything that can do what I'm trying to do. I hope it's possible, but maybe I'm reaching here. Does the community have any recommendations for how to solve this? Thank you ahead of time!
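A hedged suggestion rather than a definitive answer: the stats attempts fail because stats discards every field except its outputs, so AgentVer no longer exists on the next pipe. eventstats computes the same max while leaving every event (and its AgentVer) intact:

source=trellix AgentVer=*
| eventstats max(AgentVer) as TAV
| eval AgentStatus=if(AgentVer==TAV, "True", "False")
| stats count by AgentStatus

One caveat to verify: max() on a string field compares lexicographically, so a hypothetical 5.10.x would sort below 5.9.x; if Trellix ever ships versions like that, the comparison would need the version split into numeric parts first.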
July 2023 Special .conf23 Edition

Hayyy Splunk Education Enthusiasts and the Eternally Curious! We're back with a special edition of indexEducation dedicated to all-things .conf23. indexEducation is the newsletter that takes an untraditional twist on what's new with Splunk Education. We hope the updates about our courses, certification, and technical training will feed your obsession to learn, grow, and advance your careers. Let's get started with an index for maximum performance readability: Training You Gotta Take | Things You Needa Know | Places You'll Wanna Go

Training You Gotta Take

The Developer Track | Validate Your Mad Skills Before It's Too Late
If you went to .conf23, you now know first-hand how cool the Splunk Certification Team is and that we offer more than a dozen certifications so you can deepen your knowledge and grow your career potential. Unfortunately, one of these exams – Splunk Certified Developer Certification – is being taken out of the rotation on September 30, 2023. So, if you want to become a Splunk Certified Developer and build some killer apps with the Splunk web framework, the clock is ticking. Get your training on by following the Developer Track and reviewing the exam study guide. If you currently hold the certification/badge, it will remain valid until its current expiration date – but you may want to consider recertifying before it's gone to extend the validity of your certification for another three years.
Gotta Follow the Track | Grow Your Badge Collection

Blue Team Academy | Training for Cybersecurity Expertise
Also showcased at .conf23 was our new Blue Team Academy training – perfect for all you defenders of the universe out there! Our user conference may be over, but now the fun really begins. The Splunk Certified Cybersecurity Defense Analyst (CDA) certification exam is now open to the public in beta – for FREE. So, look over the study materials, take the exam, and show the world you're a Splunk Certified Cybersecurity Defense Analyst. We'll give you a badge to prove it too!
Gotta Be a Defender | Get Your CDA Cert Today

Things You Needa Know

Big News On-Demand | Get All the .conf23 Goodness
Sometimes, our Splunk User Conference can be almost *too* exciting. So, if you're one of those "rather-be-hanging-home-with-the-cat" types, we got you and we get you. Watch the keynote sessions, hear about the product announcements, read the hot press releases, and check out the vibe right from your comfy couch. We've got customers, we've got demos, we've got show-and-tell, and so many other experiences that will jump-start your imagination about what's possible with your skills, your knowledge, and Splunk.
Needa Experience It | Watch from Home

Curiosity is the Seed | Free eLearning Courses as Hot as Vegas
It was 117 degrees at .conf23 in Las Vegas, Nevada, but that didn't stop conference-goers from getting curious about what's hot with Splunk Education. Splunk users from all over the globe hit the show floor to learn more about how to unlock the potential of data to turn it into actionable insights. If you spent all your extra dollars in those pretty machines in the casino, get to know our catalog of free Splunk Education courses. Plus, if you're an aspiring Blue Team Academy defender, we've recently added two more free courses – "The Cybersecurity Landscape" and "Security Operations and the Defense Analyst" – just for you.
Gotta Start Free | Hot Cybersecurity Courses

Places You'll Wanna Go

Splunk University | During .conf24
This is a special edition dedicated to .conf23, so we'd be remiss if we didn't try to entice you to put your Splunk Education caps (or fezzes) on and head to .conf24 next year. Our annual user conference will take place June 10-13, 2024, at The Venetian in Las Vegas, Nevada. School will be in session the weekend prior with Splunk University. This is your opportunity to attend bootcamps, connect with a global community of passionate data experts, and explore tons of educational sessions. Um, we won't mention the poolside cocktails or the excitement of Las Vegas cuz that just wouldn't be fair to your decision-making.
Wanna Go to Vegas | Stay In-the-Know

STEP | The Online Place to Start Your Learning Journey
If you haven't already met, we'd like to introduce you to the Splunk Training and Enablement Platform (STEP). STEP is the bright new place where all learners can now access Splunk technical training. Whether you're in the market for flexible self-paced eLearning, easy-to-enroll-in instructor-led training, or the latest Splunk Certification exams, STEP is your first stop for registration and enrollment. At Splunk, we believe that everyone, everywhere should have access to technical learning opportunities so they can grow their careers and be the 'good guys' who help their organizations stay ahead of the 'bad guys.'
Needa Get Upskilled | Take the first STEP

Find Your Way | Learning Bits and Breadcrumbs
Go Get Punny Ts | Redeem Learning Rewards for Hot Summer Swag
Go Find STEP Answers | STEP FAQs
Go Watch On-Demand Tech Talks | Deep-Dives for Technical Practitioners
Go Discuss Stuff | Join the Community
Go Social | LinkedIn for News
Go Share | Subscribe to the Newsletter

Thanks for sharing a few minutes of your day with us – whether you're looking to grow your mind, career, or spirit, you can bet your sweet SaaS, we got you. If you think of anything else we may have missed, please reach out to us at indexEducation@splunk.com.

Answer to Index This: .conf – Also known as the biggest learning event of the year.
I'm using a timechart visualization as a drilldown on a dashboard where the time range is controlled by radio buttons with options of: -24h@h, -7d@d, -30d@d, -60d@d, -90d@d, -6mon@d, and -1y@d.  ...... | timechart count by "Site Name" Mostly everything works fine, but when I select -6mon@d or -1y@d the timechart no longer displays the events with their actual date and instead labels all of them as the first of the month (i.e. July 1, 2023). I imagine this has something to do with timechart's automatic grouping based on time range, but is there a way to disable this and have the events displayed with their actual date? Not only is it important for analysis purposes, I have a drilldown of this timechart that shows the specific event data, and my search is dependent on the timechart returning the specific date. See search below: ........ | eval dtg=strftime($dd_earliest$, "%d %b %Y") | where Start=dtg AND 'Site Name'="$selected_site$" These values are set in the drilldown stanzas of the search: | timechart count by "Site Name" <set token="selected_site">$click.name2$</set> <eval token="dd_earliest">$click.value$</eval>
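This matches timechart's default bucketing: with no explicit span, it widens the buckets to 1 month once the range is long enough. A hedged fix is to pin the span, e.g.:

...... | timechart span=1d count by "Site Name"

With daily buckets, $click.value$ resolves to the actual day instead of the first of the month, so the existing drilldown comparison keeps working. If -24h@h should stay hourly, the radio buttons could also set a second token (say $span_tok$, a name made up here) and the search could use span=$span_tok$.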
I have a saved search running on a 5-minute cron schedule, iteratively working through a list of previously saved search parameters. Two things: (1) Can I have a conditional cron schedule such that I somehow detect when work needs to be performed and, if so, enable the cron? The processing for a day may take 6 hours, but the cron keeps running and burning resources. (2) Some of the saved searches run in < 1 min but others take longer than 5 minutes. Instead of using a cron schedule, can I detect the saved search ID, detect when it has completed, and then initiate the subsequent execution of the saved search on the next batch of data?
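Not a complete answer to either point, but for detecting whether a previous run is still in flight, one hedged building block is the REST jobs endpoint (the field names, such as dispatchState and label, come from the search/jobs API; worth verifying on your version, and the saved search name here is a placeholder):

| rest /services/search/jobs
| search label="my_saved_search" dispatchState!="DONE" dispatchState!="FAILED"
| stats count as still_running

A guard like this at the top of the scheduled search (or checked by an orchestrating script) lets the 5-minute cron stay in place while making each tick a no-op when the prior batch has not finished.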
Hey, I am using Add-on Builder version 4.1.3 and I have many add-ons in it. Suddenly the Add-on Builder home page is displaying blank. I have checked collection.conf in the Add-on Builder app under local, and I compared all the add-on stanzas against /etc/apps, but it didn't work. Can anyone find a solution for this?
We want an event separated for each header whenever there is a new entry in the CSV file. What props should be applied to the sourcetype to have a single event? sample file We want the details in one event whenever a header is inserted in the CSV file. Please suggest.
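The post is light on detail, but if the goal is one event per CSV row, with field names taken from the header line, a minimal props.conf sketch (the sourcetype name is hypothetical) would be:

[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
SHOULD_LINEMERGE = false

INDEXED_EXTRACTIONS = csv makes Splunk parse the header and emit each subsequent row as its own event with those field names; note this setting has to live on the forwarder doing the initial parsing.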
Hello, I have an index with a field that records how long a computer has been running. Basically, when I display the information for a computer over 2 days I get this: I would like to get the max value before each 'shutdown', where the value resets to 0 afterwards. Any simple way I could do that?
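A hedged approach (assuming a numeric field called uptime; the real field name is in the screenshot, which did not carry over): sort chronologically, compare each value to the previous one with autoregress, start a new run whenever the value drops, then take the max per run:

index=my_index host=my_host
| sort 0 _time
| autoregress uptime p=1
| eval new_run=if(uptime < uptime_p1, 1, 0)
| streamstats sum(new_run) as run_id
| stats max(uptime) as max_before_shutdown by run_id

Each run_id corresponds to one power-on period, so max_before_shutdown is the peak reached just before each reset to 0.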
I have created an alert for when the response time is high and the service is down, scheduled as a cron job set to every 2 minutes. So it notifies me when the server is down. But I also want to create a recovery alert: when the service is up again, the alert should trigger only one time after the down. I created this 'up' alert as well, by putting a low-response-time condition on it, but it triggers every 2 minutes and sends an email, which is not actually required. I just need one email notification, after a down, saying that the service is up again and running normally.
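One hedged pattern is to make the recovery search match only the down-to-up transition, so it returns results (and therefore fires) exactly once per recovery. A sketch with hypothetical names (my_index, response_time, and a threshold of 5000):

index=my_index earliest=-10m
| timechart span=2m avg(response_time) as rt
| eval state=if(isnull(rt) OR rt>5000, "down", "up")
| autoregress state p=1
| where state="up" AND state_p1="down"

Alerting on "number of results > 0" then emails once, at the interval where the service first came back. Throttling in the alert's trigger settings is another way to suppress the repeated emails.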
Hi All, We are trying to upgrade the Splunk Universal Forwarder from version 8.1.0 to 9.0.3 using Ansible scripts, but we are getting an error when the script tries to start the forwarder. Herewith attached the error and the Ansible playbook.

Ansible playbook:

- name: Splunk Upgrade | Copy tgz to target
  copy:
    src: /pub/splunk/splunkpackages/{{ splunk_package }}
    dest: /tmp/{{ splunk_package }}

- name: Splunk Upgrade | Check for SYSV scripts
  stat:
    path: /etc/rc.d/init.d/splunk
  register: splunk_sysv

- name: Splunk Upgrade | Stop Splunk
  shell: |
    {{ splunk_home }}/bin/splunk stop
    tar -cvf /opt/splunk_config_backup.tar {{ splunk_home }}/etc/

- name: Splunk Upgrade | Clean up SYSV scripts
  shell: |
    rm /etc/rc.d/init.d/splunk
    /opt/splunkforwarder/bin/splunk disable boot-start
  when: splunk_sysv.stat.exists
  ignore_errors: yes

# ==> this is the task that fails
- name: Splunk Upgrade | Upgrade Forwarder and restart
  shell: |
    cd /opt
    tar -xzvf /tmp/{{ splunk_package }}
    chown -R splunk:splunk /opt/splunkforwarder
    {{ splunk_home }}/bin/splunk start --accept-license --answer-yes --no-prompt
  register: splunk_upgrade

- name: Splunk Upgrade | Convert SYSV to Systemd
  shell: |
    {{ splunk_home }}/bin/splunk stop
    chown -R splunk:splunk /opt/splunkforwarder
    /opt/splunkforwarder/bin/splunk enable boot-start -user splunk
  when: splunk_sysv.stat.exists

- name: Splunk Upgrade | start and enable splunk
  service:
    name: SplunkForwarder.service
    enabled: true
    state: started

- name: Splunk Upgrade | Cleanup tgz
  file:
    state: absent
    path: /tmp/{{ splunk_package }}

Error in splunk forwarder log:
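The error screenshot did not carry over, so this is speculation rather than a diagnosis, but one common failure mode with this sequence is that the failing task starts Splunk as root immediately after chowning the tree to the splunk user, leaving root-owned files that later block the splunk user. A hedged, hypothetical adjustment runs the post-upgrade start as the splunk user instead:

# Hypothetical variant of the start step, not a confirmed fix
- name: Splunk Upgrade | Start forwarder as splunk user
  become: true
  become_user: splunk
  command: "{{ splunk_home }}/bin/splunk start --accept-license --answer-yes --no-prompt"

Comparing against the actual message in splunkd.log would confirm or rule this out.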