All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, how can I expose alerts using the API? I've created a saved search. Thanks.
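A minimal sketch of one common approach, assuming a local instance with the default management port 8089; the host, credentials, and saved-search name below are placeholders:

# Sketch: query fired alerts and a saved search's alert config via the Splunk REST API.
import requests

BASE = "https://localhost:8089"
AUTH = ("admin", "changeme")  # placeholder credentials

# List alerts that have fired (one entry per triggered alert group).
fired = requests.get(
    f"{BASE}/services/alerts/fired_alerts",
    params={"output_mode": "json"},
    auth=AUTH,
    verify=False,  # self-signed certs are common on the management port
)
for entry in fired.json().get("entry", []):
    print(entry["name"])

# Inspect the saved search itself (schedule, alert actions, etc.).
saved = requests.get(
    f"{BASE}/services/saved/searches/My%20Alert",  # placeholder saved-search name
    params={"output_mode": "json"},
    auth=AUTH,
    verify=False,
)
print(saved.status_code)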
Within Splunk Cloud, I suspect we are whitelisting a list of approved SNMP servers. I need to "whitelist" a new SNMP server so it doesn't generate alerts. What would this list be called? Datasets? Where would I find this list so I can add another entry to it?
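If the suppression turns out to be driven by a lookup, a sketch of the usual pattern looks like this; the lookup name, field name, and alert search below are assumptions for illustration, not the actual object in your Splunk Cloud stack:

index=snmp sourcetype=snmptrap NOT [| inputlookup approved_snmp_servers.csv | fields host]
| stats count by host

Adding an entry to such a lookup from search would then look roughly like:

| inputlookup approved_snmp_servers.csv
| append [| makeresults | eval host="new-snmp-server.example.com" | fields host]
| fields host
| outputlookup approved_snmp_servers.csv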
Hi all, I want to customize my dashboards so that they use only colors defined by me and not the default ones. This color series should be defined globally, not on each panel (we should only call/href some link or token). Is there a way in Splunk to customize the color series for all panels at a global level using Simple XML? Is this possible using HTML or CSS?
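For reference, Simple XML does expose a per-panel chart option for custom series colors; making it global would still need shared CSS/JS, but a sketch of the per-panel form (search and color values chosen arbitrarily) looks like:

<chart>
  <search>
    <query>index=_internal | timechart count by sourcetype</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <option name="charting.seriesColors">[0x1f77b4,0xff7f0e,0x2ca02c,0xd62728]</option>
</chart>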
Hi Team, I want to monitor my Chrome process count and the memory used by individual Chrome processes. Below is the Process Monitoring extension configuration. @Nina.Wolinsky @Aditya.Jagtiani @Bhuvnesh.Kumar @Claudia.Landivar

metricPrefix: "Server|Component:Test1|Custom Metrics|Process Monitor|"
# metricPrefix: "Custom Metrics|Process Monitor|"

# displayName: required - Metrics to be reported under this name in Controller's Metric Browser
# regex/pid/pidFile - process is fetched using this field
instances:
  - displayName: "machine agent"
    regex: ".*java.exe -jar machineagent.jar"
  - displayName: "Chrome Monitoring"
    regex: ".* chrome.exe"

# Not necessary to modify
linux:
  process: "ps -eo pid,%cpu=CPU%,%mem=Memory%,rsz=RSS,args"
solaris:
  process: "ps -eo pid,pcpu=CPU%, -o pmem=Memory%, -o rss=RSS -o args"
aix:
  process: "ps -eo pid,pcpu=CPU%,pmem=Memory%,rss=RSS,args"

metrics:
  - CPU%:
      multiplier: 1
  - Memory%:
      alias: "Memory%"
  - RSS:
      alias: "Resident Set Size"
  - Running Instances:
      alias: "Running Instances"

# number of concurrent tasks
numberOfThreads: 2

# This is to run this in scheduled mode. In this case, the extension will fetch the data every 300 seconds
# and cache it. The cached data will be reported to the controller every minute, so there is no metric drop.
# Can be used when the data rarely changes, or to reduce the load of fetching metrics every minute.
#taskSchedule:
#  numberOfThreads: 1
#  taskDelaySeconds: 300
I've been searching for a way to pull a file from a host that has the Splunk Universal Forwarder installed, but I couldn't find anything useful. What I need is: after a specific alert is triggered, I need to pull a file from the host that triggered the alarm. I've created a couple of custom alert actions, so I'm somewhat familiar with that side of things. Maybe running some Python code on the host could upload the file to my server, but I'm not sure about that. Is there anything else that could help with this? Thanks in advance.
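As a rough sketch of one direction (not a confirmed approach): a custom alert action script receives a JSON payload on stdin when invoked with --execute, so it could read the triggering host from the first result row and fetch the file over SSH/SCP. The field name, paths, and the scp call below are assumptions for illustration:

import json
import subprocess
import sys

def main():
    # Splunk invokes custom alert action scripts with --execute and a JSON payload on stdin.
    if len(sys.argv) > 1 and sys.argv[1] == "--execute":
        payload = json.loads(sys.stdin.read())
        # "result" holds the first result row of the triggering search; the host field is assumed here.
        host = payload.get("result", {}).get("host")
        if host:
            # Assumption: key-based SSH from the Splunk server to the forwarder host is permitted,
            # and the file of interest lives at a known path.
            subprocess.run(
                ["scp", f"splunk@{host}:/var/log/app/evidence.log", "/opt/splunk/var/run/collected/"],
                check=False,
            )

if __name__ == "__main__":
    main()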
Hello everyone, I am using Splunk Enterprise Security, but at the moment, because I don't have enough logs (only from Suricata), I use only ES's "Incident Review" to track notable events and create investigations. This is quite handy while waiting for new logs to be onboarded so I can use 100% of the Enterprise Security app. Since I only use about 5% of its capabilities, I would like to "kill" most of the resource-consuming functions in ES. Any ideas what I should deactivate (e.g., accelerated searches, apps like Threat Intel since I am offline, etc.)? Thanks a lot, Chris
Context: Windows 10 Pro, Splunk 7.3.4.
What: Trying to upgrade to Splunk 8.0.1. Halfway through the upgrade, the installer errors and rolls back.
Errors:
1. The code execution cannot proceed because SSLEAY32.DLL was not found.
2. The code execution cannot proceed because LIBEAY32.DLL was not found.
I can't find accurate information anywhere about what TailReader is.
Hello all, I'm hoping that someone with a bit of experience integrating/installing Splunk apps and the Common Information Model will be able to help me with a problem I'm having. I need to monitor DNS activity and store this data in my Splunk Enterprise instance in a CIM-compliant format. I've got plenty of experience with driving Splunk, building analytics, reports, dashboards, etc., but little in the way of the underlying engineering aspects, data pipeline, formatting, etc. (though I have built regex-based field extractions in the past). CIM compliance is a requirement in order to integrate another tool that's going to go on top of Splunk.

I'm working under some limitations in what I can do to get the data in. Therefore, I've had the DNS server configured to produce debug logging into a plaintext file, and I've deployed a Universal Forwarder to monitor this file. I've also got the Splunk add-on for DNS installed, as my understanding is this will give me the CIM-compliant field extractions and parsing of the log that I need. I've got a single-instance Splunk Enterprise v8 server built, and I've been able to verify that the data is coming in (albeit with warnings about missing indexes, which I think will be resolved once I've configured the receiving side a bit better).

My understanding is that I also need to install the DNS add-on on the indexer, and this is where it gets murky. I believe the DNS add-on is superseded by the add-on for Microsoft Windows. I've opted to use the DNS add-on, however, as I saw references in other questions that the newer add-on isn't actually CIM compliant. However, the DNS add-on isn't compatible with Splunk 8. Having had a look through the add-ons, I can see the same extractions and content the DNS add-on had in the Microsoft add-on, so I think I can put the Windows add-on onto the indexer and still get all the DNS information/extractions I need from the DNS add-on installed on the forwarder. However, I can't see the Windows add-on in the Splunk apps store - it's just missing.

I'm aiming now to try a manual install, but this seems a fairly straightforward use case, so I'm asking whether anyone has had experience doing this and can guide me in the right direction. Is what I'm doing sensible? Will it do what I need it to? And, if I can't get an add-on onto the indexer, can I not just copy the regex from the add-on config files and extract my own fields, with the appropriate CIM names? Any insight, experience or suggestions would be greatly appreciated. Right now, I'm trying to hack this into working, and I'd be far happier if I knew I was at least heading in a sane direction.
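For what it's worth, the forwarder side of this is usually just a monitor stanza; a sketch under the assumption that the DNS debug log lives at the path below. The path, index, and sourcetype are placeholders - the sourcetype must match whatever the add-on you end up using expects for DNS debug logs:

inputs.conf on the Universal Forwarder:

[monitor://C:\Windows\System32\dns\dns_debug.log]
# Verify this sourcetype against the add-on's props.conf before relying on it.
sourcetype = msad:nt6:dns
index = dns
disabled = 0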
Hello Forum, I heard that Splunk together with osquery can detect processes running without a binary on disk. Can osquery be installed on the Splunk server in order to query the clients to be monitored, or is a local installation on every client required? Can it be customized to alert automatically when this happens (a process running without a binary on disk)? Thank you! Bill
A custom web application produces logs in the Tomcat format like this:

2020-01-31 18:19:02,091 DEBUG [com.vendor.make.services.ServiceName] (pool-7-thread-44) - <Short Form: time elapsed 120, pause interval 360, workflows to start 0>

followed by a potentially very long message, from one to 400 lines, < 50K characters, often JSON. The events always begin with a newline and a timestamp, always in the same format (above). Yet Splunk breaks up long events (I've seen events of 3K characters broken up, and more), and so far it looks like they are all JSON payloads being logged:

1/31/20 6:21:02.419 PM
"preroll_start-eVar32" : "live" }, "feed:relateds" : [ ] }, { "id" : asset_id,
host = hostname
source = /custom_app/tomcat/logs/custom_app.log
sourcetype = tomcat:custom_app

It also seems to do so consistently, and always in the same place regardless of the length of the event, right between these two lines:

"preroll_start-eVar29" : "feed_app|sec_us|||asset_id|video|200131_feedl_headlines_3pm_video",
"preroll_start-eVar32" : "live"

Any idea what trips it, or what can be done to force Splunk to keep these events together? Thanks!

Incidentals: TRUNCATE = 0 in props.conf (plus splunk apply cluster-bundle) on the CM seems to make no difference. /opt/splunk/etc/master-apps/_cluster/local/props.conf on the CM:

[tomcat:custom_app]
TRUNCATE = 0
EXTRACT-.... = ....
SEDCMD-scrub_passwords = s/STRING1_PASS=([^\s]+)/STRING1_PASS=#####/g
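A sketch of the kind of explicit line-breaking settings that are usually tried for this, applied in props.conf on the parsing tier alongside the existing stanza; the LINE_BREAKER regex is an assumption based on the timestamp format shown above:

[tomcat:custom_app]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 23
TRUNCATE = 0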
We have a Python script that basically does "ip address -> ... python-generated splunk calls + viz api calls -> url of a cool generated interactive viz". Is there a recommended way to make the interactive viz viewable at the end of a Phantom playbook alongside the rest of our results? Basically a fancier map. We may want views that work at the level of one IP (e.g., the first), or of all of them. How would that work? We see that Phantom custom apps support the calls get context and custom template, suggesting get context should generate the viz URL and the custom template can be a Django HTML fragment that redirects to the URL. At the same time, we were under the impression that Phantom views are heavily sanitized, so we're unsure whether the above strategy is the recommended approach.
When looking at the result of a Phantom automation, say on IP1 & IP2 + User1 & User2, we'd like to also have a table that looks something like:

IP REPORTS:
Click for 360 on IP1: http://someapp/ip360/ip=ip1
Click for 360 on IP2: http://someapp/ip360/ip=ip2

USER REPORTS:
Click for 360 on User1: http://someapp/user360/user=user1
Click for 360 on User2: http://someapp/user360/user=user2

This is just some web app, not a Phantom app, and we don't want to build an app. One Splunk representative recommended calling an HTTP app, such as through an echo/Postman service, and having it return JSON, like "postman?ip=ip1" => "{ip: ip1}". Our intuition is that there may be a simpler approach, such as a Python code block that enriches the event with a link, or some other built-in.
Trying to send all events containing this to the nullQueue:

Host Application = C:\WINDOWS\system32\WindowsPowerShell\v1.0\PowerShell.exe -NoLogo -Noninteractive -NoProfile -ExecutionPolicy Bypass & 'C:\WINDOWS\CCM\SystemTemp\02160779-88dd-4e84-96cc-7a1313cd97d9.ps1'

Here are my props & transforms entries:

[WinEventLog:Splunk-PowerShell]
TRANSFORMS-suppress-ps = suppress-powershell

[suppress-powershell]
REGEX=^Host\s?Application\s?=\s?C:\\WINDOWS\\system32\\WindowsPowerShell\\v1\.0\\PowerShell\.exe\s-NoLogo\s-Noninteractive\s-NoProfile\s-ExecutionPolicy\sBypass\s&\s'C:\\WINDOWS\\CCM\\SystemTemp\\[\d\w]{8}-[\d\w]{4}-[\d\w]{4}-[\d\w]{4}-[\d\w]{12}\.ps1'(\sFalse)?
DEST_KEY=queue
FORMAT=nullQueue

The regex matches when I use checkers like regexr. I've seen people suggest both double backslashes and four backslashes for escaping, and I'm not sure which is appropriate for transforms. Using four backslashes with the regex command in search does work, but I need these events filtered out before indexing. I'd also like to use (?i) as a modifier for case insensitivity, but I need to get this part working first.
I use JS submit buttons because the Simple XML submit button is useless (searchWhenChanged is broken, you can only have one button, you cannot inline it, cannot have multiple buttons, cannot have multiple time pickers, plus many other issues). However, unfortunately, post-processing is also limited (export is broken, and you can only use a single base search per post-process), so I would like to use loadjob as a base search. However, there is an inherent issue in doing so. If you do something like the below:

... | append [| loadjob "$exchange$"] | append [| loadjob "$badge$"] | append [| loadjob "$wineventlog$"] | eval trigger="$submit_trigger1$"

then on the first run the search is going to wait until all tokens have been populated, because they start as null. This means that when you click submit the first time, the "search is waiting for input" message is going to stay. From the second submission onwards, the "search is waiting for input" message will function as normal and change, since no tokens are null any longer, but the search will also keep getting re-run each time another base search completes, instead of just once when they have all completed. Is Splunk capable of using multiple base searches per post-process, AND using a normal, non-broken submit button?
Which deployer push mode is best? Or rather, what are the benefits of not going with the default 'merge_to_default'? Just trying to understand what the community does in SHC, because we usually use the default the first time around (new app) and from there revert to 'local_only' mode.
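For context, my understanding (worth verifying against the docs for your version) is that the mode can also be set per app on the deployer, in that app's app.conf under $SPLUNK_HOME/etc/shcluster/apps/<app>/local/, something like:

[shclustering]
deployer_push_mode = local_only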
Hi, I am the admin for a clustered Splunk environment; we are running Splunk Enterprise version 7.3.2. I have several different apps that my customers use, and access to each app is managed via Active Directory groups and LDAP authentication. I am struggling with the settings for an alert in a particular app. I am trying to determine how I can make the "Advanced Edit" option available to all users who have access to this particular app. I have ensured that all roles associated with the app have write permissions for the alert. The only way I can get the "Advanced Edit" option to display is to make an individual user the owner of the alert. When I make a user the owner of the alert, only the owner and admin can see the "Advanced Edit" option on the Searches, Reports, and Alerts UI screen. Am I missing a setting somewhere? Any help you provide is greatly appreciated!
We have the AWS Add-on installed and up and running, but we are not getting memory utilization for our Lambda execution logs. We get the allocated memory, but I want to see if I can get the actual memory used so I can compare it with the allocated memory. For AWS Lambda on our HF(s) we have it set for a row with these two dimension values:

[{"FunctionName":["."],"Resource":["."]}]
[{"FunctionName":[".*"]}]

with the metrics set to ALL and the metric stats set to Average, Sum, SampleCount, Maximum, and Minimum for both dimensions. Within AWS's "Insights" for CloudWatch Log Groups you can get this information now, and it looks like Splunk syntax to do so. But since I already have the other data and reports in Splunk, I'd prefer to be able to do this analysis there. Thanks for your guidance on this!

AWS Insights query using "maxMemoryUsed" that I'd like to replicate in Splunk via the AWS Add-on:

filter @type = "REPORT"
| stats max(@memorySize / 1024 / 1024) as provisonedMemoryMB,
    min(@maxMemoryUsed / 1024 / 1024) as smallestMemoryRequestMB,
    avg(@maxMemoryUsed / 1024 / 1024) as avgMemoryUsedMB,
    max(@maxMemoryUsed / 1024 / 1024) as maxMemoryUsedMB,
    provisonedMemoryMB - maxMemoryUsedMB as overProvisionedMB
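If the Lambda REPORT lines themselves are being ingested (for example via CloudWatch Logs through the add-on), a sketch of roughly equivalent SPL could look like the following; the index and sourcetype are placeholders, and the rex patterns assume the standard "Memory Size: N MB Max Memory Used: N MB" wording of REPORT lines:

index=aws_lambda sourcetype="aws:cloudwatchlogs" "REPORT RequestId"
| rex "Memory Size: (?<provisioned_memory_mb>\d+) MB"
| rex "Max Memory Used: (?<max_memory_used_mb>\d+) MB"
| stats max(provisioned_memory_mb) as provisionedMemoryMB, min(max_memory_used_mb) as smallestMemoryUsedMB, avg(max_memory_used_mb) as avgMemoryUsedMB, max(max_memory_used_mb) as maxMemoryUsedMB
| eval overProvisionedMB = provisionedMemoryMB - maxMemoryUsedMB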
I'm using AppD 4.5.11.3848 and Oracle JDK 1.8.0_231. I've seen some incorrect process CPU% reported by the Java agent. It appears that the number is being derived as follows:

Process CPU burnt (ms/min) / 60000 / Runtime.availableProcessors()

In other words, the number of milliseconds of CPU used in a minute, divided by the number of milliseconds in a minute, divided by the result of Runtime.availableProcessors(). So, if there are 4 CPUs and the "CPU burnt" number is 30000, then we'd have 30000 / 60000 / 4, which is 12.5%. However, the Runtime.availableProcessors() method doesn't always report the actual number of processors. Instead, if -XX:ActiveProcessorCount=n is set, it uses that. In my example, if there were actually 8 CPUs but -XX:ActiveProcessorCount=4 is set, then AppD will report 12.5% when the actual number should be 6.25% (divide by 8 instead of 4). The active processor count does not actually limit the number of CPUs the process can use, so I think this method of calculating process CPU% fails when the active processor count is set to something other than the actual value. Thanks
All, has anyone ever set up Google Authenticator with Splunk for 2FA? Are any walkthroughs available for Splunk Enterprise 8?