Hello, two quick questions regarding the Splunk Add-on for JBoss and the Splunk Add-on for JMX. The documentation says that both TAs require Oracle JDK or OpenJDK. Given that, I'm assuming there is no way to run the TAs with another JDK distribution, such as the JDK variants from Azul (Zulu Prime, aka Zing, or Zulu); could someone here confirm this? If it is indeed not possible, is it sufficient to have OpenJDK only on the Heavy Forwarder? I would assume yes, since the doc says: "Install Java Runtime 1.7 or later on the same machine as the Splunk Add-on for JBoss. Note: You need to use the OpenJDK Java Runtime or Oracle Java Runtime."
Hello, I am getting the message below on a Linux server and can't find the nmon performance data on the server. Can someone please help me figure out how to fix it? 12-11-2023 08:39:48.203 +0100 INFO loader [8843 MainThread] - SPLUNK_MODULE_PATH environment variable not found - defaulting to /splunk/splunkforwarder/etc/modules
I am currently working with the Splunk Fraud Analytics app. Has anyone here worked with this app who could support me and suggest some good resources?
Hello experts, is there any option in AppDynamics (SaaS) that I can configure to receive a daily/weekly/monthly license usage report by email? Thanks.
Hello, I am deploying the ESCU searches in our environment, but the endpoint logs are not being ingested into Splunk. To deploy the use cases, I ingested the Windows Security logs with event codes 4688/4689; the Sysmon logs are not being ingested properly. The Windows logs, mapped to the Endpoint data model, are triggering notables. Are the triggered notables relevant for incident triage?
Can somebody list the SAP EWM standard reports that are used today in S/4HANA? I know that in S/4HANA you can monitor the warehouse in transaction /SCM/MON, but I can't find a list of all the reports.
I am installing the Python for Scientific Computing add-on, but I get an error like this: Error during app install: failed to extract app from C:\Program Files\Splunk\var\run\7514f26da673bbe6.tar.gz to C:\Program Files\Splunk\var\run\splunk\bundle_tmp\df9ffe7f1b8aef48: The system cannot find the path specified. What should I do to solve this problem?
If I import into an existing lookup.csv, will the current contents be overwritten?
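As a general reference (a hedged sketch; the file and field names below are hypothetical): `| outputlookup` replaces the entire lookup file by default, while `append=true` adds rows instead of overwriting. To merge new rows into an existing lookup without losing the current contents, one pattern is:

```
| inputlookup existing_lookup.csv
| append
    [| inputlookup new_rows.csv ]
| dedup key_field
| outputlookup existing_lookup.csv
```

The `dedup` on the key column prevents duplicate rows when the same entry exists in both files.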
Can you provide me with step-by-step instructions on how to ingest Cayosoft Administrator logs into Splunk? 
Greetings Splunkers! Our use case is quite straightforward: we aim to save and (secondarily) monitor some rare hashtags on the Twitter platform (unsure whether this is possible on Facebook and other platforms). We would like the hashtag-related posts to be arranged chronologically from beginning to end, viewable in HTML format, and exportable to Excel (at minimum). Additionally, if possible, it would be ideal to see statistics on which accounts and dates used the hashtags the most. Further possibilities include analyzing post comments, likes, and the followers/following of the posters, if feasible, in the future.

We've downloaded the following software for our Linux platform: https://download.splunk.com/products/splunk/releases/9.1.2/linux/splunk-9.1.2-b6b9c8185839-linux-2.6-amd64.deb

As our programming skills are non-existent (but we can follow instructions well), we wonder whether it's possible to set up this workflow and what possibilities exist around it; we find it challenging to grasp what is available. Is it simple for non-programmers to follow, or does it require a robust infrastructure or extra investment for certain use cases (perhaps payment to Twitter)? Some use cases might be achievable through other, easier methods. We'd like the workflow to be largely "set and forget", with the ability to easily change the hashtag or search word in the future.

We are currently in a trial (unsure whether it lasts 14 or 60 days, and what is included) and would like to know the future cost after the free usage ends. While our use case is culturally important, the hashtags are not big, so they will likely generate a small number of results. However, we'd like to understand the potential costs if a hashtag were to trend, resulting in thousands of daily results (e.g., 5% with images). What would be the approximate cost of saving 5000+ results per day or month? What are the limits here? Is Splunk the right tool for this kind of work?

Are there pre-made templates or sample posts to guide this type of work? AI directed us to this software when we sought help with saving hashtags. Please feel free to provide thorough explanations for any part of our questions and include links to samples. We are comfortable copying, pasting, and editing scripts, but cannot follow programming logic ourselves. Are there any requirements for the Twitter API, such as a separate account login to avoid posing a risk to our main account (as it seems X no longer allows browsing the site without logging in)? If there are sample use cases related to our needs, we would also like to see screenshots or videos.

Thank you for all the help and patience. splunk@arvutiministeerum.ee - extra information can be sent to our email as well.

// Is 'Knowledge Management' the correct spot for this post? Feel free to move it if there is a more appropriate sub-forum. // Margus
I have a CSV file containing a user list, and I want to create an alert that monitors login failures for the users in that list. How do I use the lookup file? Can you please let me know?
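A minimal sketch of such an alert, assuming Windows Security logs (EventCode 4625 for failed logons) and a lookup uploaded as monitored_users.csv with a user column; the index and all names here are hypothetical and will need adjusting:

```
index=wineventlog EventCode=4625
    [| inputlookup monitored_users.csv
     | fields user
     | rename user AS Account_Name ]
| stats count AS failures BY Account_Name
```

The subsearch expands the lookup into an OR of Account_Name terms, so only failures for listed users are counted. Saved as an alert, a trigger condition of "number of results > 0" on a schedule would complete the use case.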
I got some data when I started up the application, but nothing since then. There are about 60 users, so I should be seeing something. Any ideas on what I can do to get it to collect data? https://imgur.com/a/S2s4Cgr It's been running for probably a couple of hours now.
Hi guys, I was planning to sit the exam for Certified Enterprise Admin, but at the last minute I realised that I must do the Power User certification first. I have been studying on different platforms, but wanted to see whether Splunk offers self-paced full certification courses, such as Splunk Core Power User and Splunk Enterprise System Admin. Whatever I see on the Splunk website for courses is not simple and straightforward to choose from; it seems very complicated to find the right full course (paid or free). Could someone please show me a simple, direct link to start with, rather than moving around the site? Regards, Azim
I have the below requirement. The log contains lines like:

Log info: 09.00 PM xyz event received for customernumber:1234
Log info: 09.05 PM abc event received for customernumber:1234
Log info: 09.10 PM pqr event received for customernumber:1234

There are n customer numbers like this in the Splunk logs, and I want to check whether all three events have been received for each customer number. I tried an AND search, like "xyz event received for customernumber" AND "abc event received for customernumber" AND "pqr event received for customernumber" | rex ... (trying to extract the customer number), but it is not working. Please help with the exact query.
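One common approach (a sketch only; the index name and the rex pattern are assumptions based on the sample lines above) is to extract both the event name and the customer number, then count distinct event names per customer rather than ANDing the three strings, which can never match a single event:

```
index=app_logs "event received for customernumber"
| rex "(?<event_name>\w+) event received for customernumber:(?<customernumber>\d+)"
| stats dc(event_name) AS distinct_events values(event_name) AS events BY customernumber
| eval status = if(distinct_events >= 3, "all received", "missing events")
```

Appending `| where distinct_events < 3` would list only the customer numbers that are missing one of the three events.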
Hey, consider a scenario where you want to create a reusable input playbook that takes advantage of condition blocks such as Filter and Decision. For example, an input playbook that receives an ip_hostname, then queries AD over LDAP to check whether the ip_hostname is in a specific OU. That would be easily achievable with Filter/Decision normally, but since it's in an input playbook, I haven't seen any output parameters that you can use in a main playbook to find out whether the condition was true or false. Thanks in advance.
Hi all, I have a problem similar to one already solved by @PickleRick in a previous question: a concentrator (a different one from before) sends many logs to my HF via syslog. It's a mixture of many kinds of logs (Juniper, Infoblox, Fortinet, etc.). I have to override the sourcetype value, assigning the correct one, but the issue is that the related add-ons then need to perform a second override of the sourcetype value, and this second override doesn't work. I'd like a hint about the two solutions I've thought of, or a different one: if it's possible, how do I make a double sourcetype override? Or do you think it's better to modify the add-ons and avoid the first override? Thank you for your help. Ciao. Giuseppe
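For context, the first override is typically done with a props/transforms pair keyed on the syslog input; all stanza names and regexes below are hypothetical placeholders. A likely cause of the symptom described is that index-time TRANSFORMS are selected by the sourcetype an event has when it enters the parsing pipeline on a given instance, so the add-on's own sourcetype-rewriting transforms will not fire a second time on the already-rewritten value on the same HF:

```
# props.conf (on the HF; source stanza matching the syslog input)
[source::udp:514]
TRANSFORMS-set_st = st_juniper, st_fortinet

# transforms.conf
[st_juniper]
REGEX = junos
FORMAT = sourcetype::juniper
DEST_KEY = MetaData:Sourcetype

[st_fortinet]
REGEX = fortigate
FORMAT = sourcetype::fortinet
DEST_KEY = MetaData:Sourcetype
```

If the first override writes directly to the final sourcetype each add-on expects, the second override may become unnecessary, which is effectively the second solution mentioned above.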
Hello team, I have a distributed environment and I receive some data via syslog. We created some regexes for field extraction on the captain SH, not on the deployer (this part should probably be search-time field extraction), and everything works, but I don't understand it. I know the captain SH replicates the conf file to the other SHs. Is this process automatic or not? I can't see the same config on the deployer; I think that's normal, because we don't have an app for this log source and just use a custom parse. If I had an app I could use the deployer, but in this case I'm wondering about the custom process. Can somebody explain this process to me? Also, why don't we use the deployment server for HF parsing? (Maybe there is another way; I'm not clear on this.)
Hi Team/Community, I'm having an issue with a lookup file. I have a CSV with two columns: the first is named ioc and the second is named note. This CSV is an intel file created to search for any visits by users to malicious URLs. The CSV has 66,317 lines and its encoding is ASCII. When we run the search, Splunk keeps giving the error message "Regex: regular expression is too large". This error isn't consistent, and this particular alert doesn't seem to trigger much for many of our customers. We also have similar separate alerts built for domains and IPs; those work much better than this URL CSV. I would like help understanding why the regex is too large. Is the number of lines causing a major issue with this file running properly? Any help would be greatly appreciated, as I think this search is not running efficiently.
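One thing worth checking (a hedged suggestion; the index, sourcetype, and field names below are assumptions based on the description): "regular expression is too large" typically appears when many lookup entries are compiled into one pattern, for example via a WILDCARD match_type on the lookup or a large subsearch expanded into the base search. If the ioc values are exact URLs, an exact-match lookup avoids regex compilation entirely and scales to far more rows:

```
index=proxy sourcetype=web
| lookup ioc_urls.csv ioc AS url OUTPUT note
| where isnotnull(note)
| stats count BY user, url, note
```

If wildcard entries (e.g. "evil.example/*") are genuinely needed, splitting them into a separate, much smaller wildcard lookup and keeping the bulk of the 66k rows exact-match is a common compromise.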
Hi, we have been seeing a sudden spike in license consumption in our Splunk ES since last week. Where can we see the daily license consumption for all indexes, and what could be the cause of this sudden spike?
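Daily per-index consumption can usually be pulled from the license usage log in the standard internal index (run on or against the license master; note that per-host/source detail in this log is squashed when there are very many combinations, but the per-index totals remain usable). A sketch:

```
index=_internal source=*license_usage.log* type=Usage earliest=-30d
| eval GB = b/1024/1024/1024
| timechart span=1d sum(GB) AS GB_ingested BY idx
```

Comparing the per-index daily totals before and after the spike usually points to the index, and from there the source or host, responsible.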
Hello, I am working on a search to find domains queried via a particular host and list a count of hits per unique domain on the host, along with the username. This search returns the domains individually, but they show up as an entry at each "count" (see the actual results below). What I am looking to do is show only the value for the highest count of each domain, and order the results from highest to lowest (see the expected results below).

index=foo Host=<variable> | streamstats count(query) as Domains by User query Workstation | eval combo=Domains +" : "+ query | stats values(combo) as "Unique Hits : Domain" by User Workstation | sort - combo

Actual results (truncated):
1 : www.youtube.com
2 : history.google.com
3 : history.google.com

Expected results (truncated):
3 : history.google.com
2 : mail.google.com
1 : www.youtube.com
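A sketch of one way to get the expected output from the same base search: replace the running `streamstats` count with a plain `stats count`, sort before combining, and use `list()` rather than `values()`, since `list()` preserves the sorted order while `values()` re-sorts lexicographically (which would place "11 : ..." before "2 : ..."):

```
index=foo Host=<variable>
| stats count AS hits BY User Workstation query
| sort 0 - hits
| eval combo = hits." : ".query
| stats list(combo) AS "Unique Hits : Domain" BY User Workstation
```

`sort 0` removes the default 10,000-row limit so no domains are dropped before the final grouping.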