OK, so my data comes from a vulnerability management system. Every day I get a dump of every vulnerability in the system. Each unique vulnerability on every asset is given a UniqueAssetVulnID. That ID is specific to that vulnerability on that asset, day over day. Now I would like to identify when a vulnerability has been remediated, i.e. it appeared on yesterday's scan but not on today's scan, broken out by Category (which is just the severity). This would all be plotted on an area chart. Sample data would be like:

_time Category UniqueAssetVulnID
05/26/2020 Low 1249+cve-2020-3948
05/27/2020 High 5239+cve-2010-4533

This is my attempt so far:

index=rapid7 sourcetype="VulnData"
| streamstats current=f last(dc(UniqueAssetVulnID)) as UniqueVulnslast_count by Category
| rename UniqueAssetVulnID as current_UniqueAssetVuln
| eval delta = UniqueVulnslast_count - current_UniqueAssetVuln
| timechart span=1d delta by Category useother=f
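One way to sketch this (assuming the field names above, and treating "remediated" as a day-over-day drop in the distinct ID count per Category) is to count distinct IDs per day first, then diff against the previous day:

```spl
index=rapid7 sourcetype="VulnData"
| bin _time span=1d
| stats dc(UniqueAssetVulnID) as daily_count by _time Category
| streamstats current=f window=1 last(daily_count) as prev_count by Category
| eval remediated=if(prev_count > daily_count, prev_count - daily_count, 0)
| xyseries _time Category remediated
```

Note this measures the net decrease per Category, not individual remediated IDs (new findings on the same day can mask remediations); matching yesterday's ID set against today's would need a set-difference approach instead.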
I'm trying to apply color logic to a specific column in a table, by range and thresholds. I have 1000 rows in that table, with 10 rows presented on each page. The range of colors should be the same for all the values in the table, not only those presented on the current page. As suggested here, I tried the following method:

<format type="color" field="kw_blocks / total_kw_blocks">
  <colorPalette type="list">[#DC4E41,#F8BE34,#53A051]</colorPalette>
  <scale type="threshold">33,66</scale>
</format>
<format type="number" field="kw_blocks / total_kw_blocks">
  <option name="unit">%</option>
</format>

The only issue with this solution is that it uses constant thresholds:

<scale type="threshold">33,66</scale>

However, in my case I don't know the max value in advance, so I get it dynamically from the search query. I would therefore like the thresholds to be percentiles of that value. It would look something like this:

<scale type="threshold">0.33*Max(kw_blocks / total_kw_blocks),0.66*Max(kw_blocks / total_kw_blocks)</scale>

Any idea how to do it?
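One common workaround (a sketch, not tested against your dashboard; `ratio`, `max_val`, and `pct_of_max` are names introduced here) is to normalize the value in the search itself, so the static 33/66 thresholds apply to a percentage of the observed maximum:

```spl
... | rename "kw_blocks / total_kw_blocks" as ratio
| eventstats max(ratio) as max_val
| eval pct_of_max=round(ratio / max_val * 100, 1)
```

The `<format>` stanzas would then target `field="pct_of_max"` and keep the fixed thresholds 33,66, since the field is now always scaled 0-100 relative to the table-wide maximum.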
Hi, I just installed the app on a universal forwarder and I'm getting this error in the log. Any idea what the issue is? Is there any configuration I need to edit other than inputs.conf? Thanks.

The server has Python 2.7.5 and I created a symlink: splunkforwarder/bin/python2.7 -> /usr/bin/python

file_meta_data_modular_input.log:

2020-05-27 13:04:35,867 ERROR Execution failed
Traceback (most recent call last):
  File "/home/opc/splunk/splunkforwarder/etc/apps/file_meta_data/bin/modular_input.zip/modular_input/modular_input_base_class.py", line 1095, in execute
    self.do_run(in_stream, log_exception_and_continue=True)
  File "/home/opc/splunk/splunkforwarder/etc/apps/file_meta_data/bin/modular_input.zip/modular_input/modular_input_base_class.py", line 976, in do_run
    self.run(stanza, cleaned_params, input_config)
  File "/home/opc/splunk/splunkforwarder/etc/apps/file_meta_data/bin/file_meta_data.py", line 621, in run
    file_filter=file_filter)
TypeError: get_file_data() got an unexpected keyword argument 'file_filter'

inputs.conf:

[file_meta_data://vendor_data]
interval = 5m
file_hash_limit = 500MB
file_path = /home/vendor_files/
recurse = 0
only_if_changed = 0
include_file_hash = 0
disabled = 0
So I have a Universal Forwarder installed on a Windows system (v7.3.3), and I have it set up to communicate with my Splunk Enterprise server (v7.3.4). The Windows system has checked into Splunk when I look at the Web GUI (Settings > Forwarder Management). I think my problem has to do with SSL, but I am not sure how or where to change this setting to make it work. The errors I am getting are:

ERROR TcpInputProc - Error encountered for connection from src=x.x.x.x:55681. error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number
ERROR TcpInputProc - Error encountered for connection from src=x.x.x.x:55682. error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number
ERROR TcpInputProc - Error encountered for connection from src=x.x.x.x:53711. error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number
ERROR TcpInputProc - Error encountered for connection from src=x.x.x.x:53712. error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number

Any help would be greatly appreciated. I am new to Splunk!
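That particular error usually means a plain-TCP sender is connecting to an SSL-enabled receiving port, or vice versa. As a sketch (assuming receiving port 9997; stanza names and cert paths are illustrative), the two sides need to agree:

```conf
# Receiving side (indexer) inputs.conf -- use one of these, not both:
[splunktcp:9997]        # plain TCP listener
[splunktcp-ssl:9997]    # SSL listener; also needs an [SSL] stanza with a serverCert

# Sending side (UF) outputs.conf must match the receiver:
[tcpout:default-autolb-group]
server = <indexer>:9997
# For the SSL listener, the UF additionally needs its SSL client settings
# (e.g. clientCert / sslPassword) configured in this stanza.
```

So the first thing to check is whether the indexer's listening stanza is `splunktcp-ssl` while the forwarder is sending plain TCP (or the reverse).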
I'm trying to modify the Splunk app below to perform additional sourcetype extraction: TA-Pfsense App.

I have data coming in over syslog and being saved as sourcetype "pfsense". The TA performs a transforms-extract on the pfsense sourcetype in props.conf, based on a regex in transforms.conf that looks for a timestamp at the beginning of the event. It then extracts the pfsense log type (e.g. filterlog, dhcpd, openvpn), which typically follows the timestamp, and sets it as the sourcetype.

transforms.conf:

[pfsense_sourcetyper]
REGEX = \w{3}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s(?:[\w.]+\s)?(\w+)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::pfsense:$1

props.conf:

[pfsense]
TRANSFORMS-pfsense_sourcetyper = pfsense_sourcetyper
SHOULD_LINEMERGE = false
SEDCMD-event_cleaner = s/^(\w{3}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s)+\S+\.\S+\s+/\1/g
SEDCMD-event_cleaner2 = s/^(\w{3}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s)+(\w{3}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s)+/\1/g
SEDCMD-event_cleaner3 = s/^\w{3}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s\S+\s(\S+\s)/\1/g

The above from the TA works fine, except that I configured a snort feed in Unified2 format that I want to send to a pfsense:snort sourcetype, and it doesn't work because it has a completely different format. I tried to tweak the TA by adding another transforms-extract, but it does not work. I've tried different variations of regex to match the log format but have been unable to get it to work so far. Any thoughts?

Raw log (IPs redacted):

| [SNORTIDS[LOG]: [pf.local] ] || 2020-05-27 16:15:31.157+000 2 [1:2403468:57488] ET CINS Active Threat Intelligence Poor Reputation IP TCP group 85 || misc-attack || 6 89.XXX.XXX.XXX 2XX.XXX.XXX.XXX 4 20 0 40 1773 0 0 17229 0 || 51267 4303 1181682881 0 5 0 2 1024 1546 0 || 64 ..g.KlL..p....E..(......CMY...d....C..Fo......P.................
Added to transforms.conf:

[pfsense_snort]
REGEX = (?:\| \[)(SNORT)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::pfsense:snort

Added to props.conf:

[pfense:snort]
TRANSFORMS-pfsense_snort = pfsense_snort
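Two things stand out in the snippets above. First, index-time TRANSFORMS fire on the sourcetype the data arrives as, so the new transform has to be attached to the existing [pfsense] stanza; a [pfsense:snort] props stanza never matches because no data arrives with that sourcetype (and the stanza shown also has a typo, "pfense"). Second, transforms in a list run in order, with later matches overwriting DEST_KEY. A sketch (the REGEX is an assumption based on the sample log line, not tested against live data):

```conf
# props.conf
[pfsense]
TRANSFORMS-pfsense_sourcetyper = pfsense_snort, pfsense_sourcetyper

# transforms.conf
[pfsense_snort]
REGEX = ^\|\s\[SNORTIDS\[LOG\]
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::pfsense:snort
```

Since the snort events start with "| [" rather than a syslog timestamp, the original pfsense_sourcetyper regex should not match them and overwrite the result, but ordering the snort transform first keeps the intent explicit.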
I have 6 alerts, and each sends a mail when triggered, so 6 mails in total. This clutters the inbox of the alert recipients. Is there a way to have one single mail with all the alerts' data listed in it?
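One approach (a sketch only; the search terms, conditions, and names here are placeholders for your six alerts) is to replace the six alerts with a single scheduled search that ORs the conditions together, tags each row with which alert it belongs to, and sends one digest mail:

```spl
(<alert 1 search terms>) OR (<alert 2 search terms>)
| eval alert_name=case(<alert 1 condition>, "Alert 1",
                       <alert 2 condition>, "Alert 2")
| stats count latest(_time) as last_seen by alert_name
```

The combined search is then saved as one alert whose email action includes the results table, so recipients get a single mail summarizing everything that fired in that interval.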
I currently have a Splunk Cloud instance running 8.0. This add-on is not compatible with Splunk Cloud or 8.0. If I were to install an HF on premise running 7.3 and send the logs to Splunk Cloud, I'm pretty sure that would work as far as getting data in. Without the add-on in Splunk Cloud, would we be missing field extractions and possibly other things?
Hello, I'd like to run an average over the course of May 16, 2020 (24 hours) on a particular IP address. I'd like to see an event count for this IP with an average overlay over the course of a 24-hour timespan on the indicated date. Here's my base query:

index=* "IP Address"
| timechart count by src
| sort -count

Suggestions are greatly welcomed and very much appreciated.
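A sketch (assuming hourly buckets, and that "IP Address" stands in for your literal IP search term) that pins the search to May 16, 2020 and overlays the day's average on the hourly count:

```spl
index=* "IP Address" earliest="05/16/2020:00:00:00" latest="05/17/2020:00:00:00"
| timechart span=1h count as events
| eventstats avg(events) as daily_average
```

In the chart formatting you can then set daily_average as a chart overlay, so it renders as a flat line across the hourly event counts.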
How do I read the data in a JSON request received on an AppD-instrumented server, and display that data on a dashboard?
We have an alert that searches our databases for unknown/missing columns or tables. The search runs on an hourly basis and sends an email if the condition is triggered. The issue we're running into is that sometimes we don't have an opportunity to fix the missing column/table before the next hour goes by, and the alert gets triggered again, leading to multiple emails for the same issue. We basically want to stop duplicate emails/duplicate triggers. I have read about throttling, and that does not seem to be the fix we're looking for. In SQL terms, we'd want an 'if/else' statement: if this missing column/table has already sent an email, don't send again; else, send. Any thoughts would be appreciated!
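One non-throttling pattern (a sketch; alerted_issues.csv, issue_key, and the db/table/column field names are hypothetical and would need to match your data) is to keep a lookup of already-alerted issues and filter them out before the alert condition is evaluated:

```spl
<base search for missing columns/tables>
| eval issue_key=db_name.":".table_name.":".column_name
| lookup alerted_issues.csv issue_key OUTPUT issue_key as already_alerted
| where isnull(already_alerted)
| outputlookup append=t alerted_issues.csv
```

The lookup file has to exist (even empty, with an issue_key header) before the first run, and once an issue is fixed its row would need to be removed from the lookup so a recurrence can alert again.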
Hi, we are using Splunk Enterprise 7.1.1 to develop some predictive models and to send mail alerts to the relevant site people when there is any deviation. The mail alerts are working fine. Our management wants something like a tracking/ticketing system for each and every mail alert: is it possible to create a ticket ID in the mail itself to track it?

Current mail alerts look like this: [screenshot]. What management expects is like this: [screenshot]. The ticket ID needs to be created in the subject itself, and the ticket ID needs to be logged to some index to track the changes for that particular ticket ID. Is this possible? Can you please suggest something?

One more thing: we already tried Alert Manager, which creates an incident once the alert is generated, but management is not satisfied with that. Please suggest.
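As a sketch of the search-side half (the TKT- prefix and the alert_tracking index are hypothetical names), you can mint an ID inside the alert search, log it to an index with collect, and surface it in the mail subject via the result token:

```spl
<alert search>
| eval ticket_id="TKT-".strftime(now(), "%Y%m%d%H%M%S")
| collect index=alert_tracking
```

The email action's subject can then reference it, e.g. "Deviation alert - $result.ticket_id$". The target index has to be created beforehand, and later status changes for a ticket would be further events written to the same index keyed on ticket_id.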
My first subsearch, and it's not going well. I have two queries I need to combine to get a single results table.

My first query finds how many changes there have been and when the most recent change occurred (events in my data have a 'no_of_changes' field, which is an integer value):

index=syncserver sync_job_name=Purchasing sync_job_name!="Stage 1" no_of_changes!=0
| stats sum(no_of_changes) AS "No. of Changes" latest(_time) AS "Last Change" by sync_group_name
| convert ctime("Last Change")

which produces the "Query 1 results table" below.

Then I have a second query, which is very similar to the first, but all I really want from it is the date/time of the last event for each sync group:

index=syncserver sync_job_name=Purchasing sync_job_name!="Stage 1"
| stats sum(no_of_changes) AS "No. of Changes" latest(_time) AS "Last Run" by sync_group_name
| convert ctime("Last Run")

which produces the "Query 2 results table" above. Note that 'SyncGroup03' does not appear in the first set of results, as there were no changes in the selected time period. The table I want to produce combines these two sets of results, as in the "Desired results table" below.

So I have tried the following query, which makes my first query a subsearch of the second query:

index=syncserver sync_job_name=Purchasing sync_job_name!="Stage 1"
| stats sum(no_of_changes) latest(_time) AS "lastRun" by sync_group_name
| appendcols [ search index=syncserver sync_job_name=Purchasing sync_job_name!="Stage 1" no_of_changes!=0
    | stats latest(_time) AS lastChange by sync_group_name ]
| convert ctime(lastChange)
| rename lastChange AS "Last Change" lastRun AS "Last Run" sum(no_of_changes) AS "No. of Changes"
| table sync_group_name "No. of Changes" "Last Change" "Last Run"

which isn't what I was hoping for (see the "Actual results table" above). The "Last Run" column isn't getting populated, and the time for "Last Change" on 'SyncGroup04' (04/14/2020 03:05:37) has appeared against SyncGroup03. I also had to use different field names in the query, as Splunk complained about the presence of 'last' in 'latest(_time) AS "Last Change"' once the subsearch was added. Can someone help me with this?
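appendcols pastes result rows together by position, not by sync_group_name, which is why SyncGroup04's change time landed on the SyncGroup03 row once the two result sets had different lengths. A single-search sketch (assuming the field names above) avoids the subsearch entirely by computing "Last Change" only over events where no_of_changes!=0, using eval inside stats:

```spl
index=syncserver sync_job_name=Purchasing sync_job_name!="Stage 1"
| stats sum(no_of_changes) as "No. of Changes"
        latest(_time) as "Last Run"
        latest(eval(if(no_of_changes!=0, _time, null()))) as "Last Change"
        by sync_group_name
| convert ctime("Last Run") ctime("Last Change")
```

Groups with no changes in the period (like SyncGroup03) still get a "Last Run" row, with "Last Change" left empty.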
We have an application that has a mix of on-premise static infrastructure and newer services running in Azure. We recently implemented a new .Net Core service as an Azure App Service with autoscaling. The new service is appearing fine in our application flow map, but the node count is just increasing for every scale-up event, and not decreasing on scale-down.  Right now, it looks like we have 97 nodes, when in reality it's running at between 6-8 depending on load. Is there a way to manage this automatically so the node count is correct for this service, preferably without static on-premise nodes disappearing through inactivity during quieter processing periods?
Hello, we have a requirement to secure the communication between the deployment server and the UFs on port 8089. Can someone help me with the questions below? We are managing around 200 servers from a DS, and the requirement is to set up secure communication for a couple of those servers.

1. Can we do this for only some servers? If so, how do we set it up?
2. If we have to do this for all the servers managed by the DS, can we use the DS to push the certificates to the UFs, and what are the configuration steps?

Any help and reference documents would be helpful. Thanks, Bijender
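As a sketch of the UF side (the app name and cert paths are placeholders; exact setting names vary somewhat by version), the management-port TLS settings live in server.conf, and a deployment app can carry both the cert files and the config, scoped to just a few hosts via a serverclass:

```conf
# server.conf on the UF, delivered via a deployment app (e.g. all_ssl_base)
# that also ships the cert files under etc/auth/mycerts/
[sslConfig]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/client.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/ca.pem
sslVerifyServerCert = true
```

Limiting the serverclass whitelist to the couple of servers in question answers the "only some servers" case; note the initial DS connection still happens with the existing certs, so the new certs take effect after the app is deployed and the UF restarts.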
Hi Splunkers, we are getting unwanted sub-folders when we search for a particular folder. I am creating a query which displays the file system usage for a particular folder, but I am getting all the folder names instead of the particular folder name alone. Consider that on server xyz we have n file systems (for example /, /var, /var/abc, /var/abc/cde, etc.). When I search for /var alone by giving that in the query, it displays /var and all the sub-folders in it, which is not as expected.

Query:

index=" " sourcetype=" " mn=/ OR /var
| eval Usage=replace(Used,"%","")
| timechart values(Usage) as Used by mn

Note: mn is the file system name.

Expected output: the chart should show only the / and /var file systems.
Output we are getting now: the chart shows /, /var, /var/abc and /var/abc/cde.
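The likely culprit in the query above is that `mn=/ OR /var` parses as (mn="/") OR ("/var" as a bare term anywhere in the event), so any event mentioning /var, including /var/abc, matches. A sketch (assuming mn holds the mount point) that makes both disjuncts explicit field comparisons:

```spl
index=" " sourcetype=" " (mn="/" OR mn="/var")
| eval Usage=replace(Used, "%", "")
| timechart values(Usage) as Used by mn
```

An equivalent and slightly tidier form of the filter is `mn IN ("/", "/var")`.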
I want to upgrade a system. How do I find the ID for the user that installed it? Is it somewhere in the system?
Hello team, I keep getting this error on the new input configuration, just after selecting the AWS account (which I could configure successfully before). Because of this I cannot select the S3 bucket. I have tried this configuration with 8.0.3 Splunk Enterprise + 5.0.1 AWS Add-On, and with 8.0.4 Splunk Enterprise + 5.0.1 AWS Add-On, without success so far. This is the error logged in splunkd:

05-27-2020 10:21:02.050 +0200 ERROR AdminManagerExternal - Stack trace from python handler:
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 148, in init
    hand.execute(info)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 634, in execute
    if self.requestedAction == ACTION_LIST: self.handleList(confInfo)
  File "", line 94, in handleList
  File "", line 44, in timed
splunktaucclib.rest_handler.error.RestError: REST Error [400]: Bad Request -- 'NoneType' object has no attribute 'get_all_buckets'
05-27-2020 10:21:02.050 +0200 ERROR AdminManagerExternal - Unexpected error "" from python handler: "REST Error [400]: Bad Request -- 'NoneType' object has no attribute 'get_all_buckets'". See splunkd.log for more details.
I am providing summarized reports on disk space over several hosts using this query:

index=os sourcetype=df host=host1 OR host=host2
| eval CPD_Disk=case(
    filesystem LIKE "%gas%", "Gas Volume",
    filesystem LIKE "%cadbas%", "CMS Volume",
    filesystem LIKE "%spg%", "SPG Volume",
    filesystem LIKE "%gen%", "Generator Volume",
    filesystem LIKE "%stm%", "Steam Volume" )
| chart eval(sum(UsedMBytes)/1024/1024) as TerraBytes by CPD_Disk
| addcoltotals TerraBytes labelfield=CPD_Disk label=Total

I would like to provide the total amount of growth over the past 30 days. How could I add something like this?
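A sketch of a 30-day growth figure (run over the last 30 days; it compares the first and last daily totals per volume, reusing the same CPD_Disk eval as above, abbreviated here):

```spl
index=os sourcetype=df host=host1 OR host=host2 earliest=-30d@d
| eval CPD_Disk=case(filesystem LIKE "%gas%", "Gas Volume", ...)
| bin _time span=1d
| stats sum(UsedMBytes) as daily_mb by _time CPD_Disk
| stats earliest(daily_mb) as first_mb latest(daily_mb) as last_mb by CPD_Disk
| eval GrowthTB=round((last_mb - first_mb)/1024/1024, 2)
```

This could run as a second panel, or be folded into the report with appendcols if the row order per CPD_Disk is guaranteed to match.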
I'm in the process of developing an app that needs a custom visualization from an existing (open source) app. Is there any way to embed that custom visualization dependency into my current Splunk app? The goal is to be self-contained, so that deployment doesn't require installing that other app. Let's not talk about the license here! Thank you very much.
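As a sketch (the layout follows the custom visualization framework; copied_viz is a placeholder name), a visualization can be vendored into your own app by copying its built assets and registering it in your app's visualizations.conf, after which dashboards reference it as your_app.copied_viz instead of source_app.copied_viz:

```conf
# your_app/default/visualizations.conf
[copied_viz]

# files copied from the source app:
#   your_app/appserver/static/visualizations/copied_viz/visualization.js
#   (plus any formatter.html / preview images the source app ships)
```

Whether the copied bundle works standalone depends on how the source viz was built; if it references its original app's static paths internally, those references would need updating too.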
Logs are not coming into Splunk Enterprise. I've found the below error in splunkd.log (../splunkforwarder/var/log/splunk/splunkd.log):

05-20-2020 10:33:28.196 +0000 WARN FilesystemChangeWatcher - removed WFS_EXISTS direntname='some_log_path' stat_failure_was_temporary

All the log paths and directories have 755 permissions recursively, but I am still unable to see the logs. Kindly help me with this.