All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I have tried quite a few different ways to capture data within a JSON object and return it as separate events, but my breakdown always occurs when attempting to access the data in the nested arrays.

{ "Items": [
  { "CN": "AccountName", "CV": "AccountOne", "Props": [ { "PN": "PropOne", "PV": "5" }, { "PN": "PropTwo", "PV": "3" } ] },
  { "CN": "AccountName", "CV": "AccountOne", "Props": [ { "PN": "PropOne", "PV": "5" } ] },
  { "CN": "AccountName", "CV": "AccountTwo", "Props": [ { "PN": "PropOne", "PV": "5" }, { "PN": "PropThree", "PV": "8" } ] },
  { "CN": "PersonName", "CV": "Bob", "Props": [ { "PN": "PropOne", "PV": "5" }, { "PN": "PropThree", "PV": "8" } ] },
  { "CN": "PersonName", "CV": "Bob", "Props": [ { "PN": "PropThree", "PV": "8" } ] }
] }

Prepared (escaped) version for testing:

{ \"Items\": [ { \"CN\": \"AccountName\", \"CV\": \"AccountOne\", \"Props\": [ { \"PN\": \"PropOne\", \"PV\": \"5\" }, { \"PN\": \"PropTwo\", \"PV\": \"3\" } ] }, { \"CN\": \"AccountName\", \"CV\": \"AccountOne\", \"Props\": [ { \"PN\": \"PropOne\", \"PV\": \"5\" } ] }, { \"CN\": \"AccountName\", \"CV\": \"AccountTwo\", \"Props\": [ { \"PN\": \"PropOne\", \"PV\": \"5\" }, { \"PN\": \"PropThree\", \"PV\": \"8\" } ] }, { \"CN\": \"PersonName\", \"CV\": \"Bob\", \"Props\": [ { \"PN\": \"PropOne\", \"PV\": \"5\" }, { \"PN\": \"PropThree\", \"PV\": \"8\" } ] }, { \"CN\": \"PersonName\", \"CV\": \"Bob\", \"Props\": [ { \"PN\": \"PropThree\", \"PV\": \"8\" } ] } ] }

I would want this to result in something like:

Thanks!
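One common pattern for turning each nested Props entry into its own result row is repeated spath plus mvexpand: first expand the outer Items array, then expand each item's Props array. This is only a sketch against the sample JSON above, assuming the whole object sits in _raw:

```
| spath input=_raw path=Items{} output=item
| mvexpand item
| spath input=item
| spath input=item path=Props{} output=prop
| mvexpand prop
| spath input=prop
| table CN CV PN PV
```

Each mvexpand doubles the row count per multivalue entry, so on large events you may need to raise the mvexpand memory limit or filter before expanding.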
Hello, I think there is a permission issue after my Linux system admin changed access to a directory, and now my search head has stopped. Whether I run splunk status or splunk start, I receive the error message:

Removing stale pid file... Can't unlink pid file "/opt/splunk/var/run/splunk/splunkd.pid": Permission denied

My splunkd.pid file permissions are set as follows:

-rwxrwxrwx. 1 splunk splunk 364 Apr 21 10:50 splunkd.pid

Has anyone resolved the same issue? Thank you,
I've got a bunch of custom syslog traffic flowing to a fluentd tier I have running in Kubernetes. I'm using the rewrite_tag_filter plugin to set the tag of all the events to their target index. I then use another layer of that plugin to add the host and sourcetype values to the tag. I'm sending all of that to the same output:

@type splunk_hec
index main
sourcetype ${tag_parts[1]}
host ${tag_suffix[2]}
source ${tag}
hec_host HEC_Host
hec_port HEC Port
hec_token HEC Token
ca_file /fluentd/etc/server.pem

In the config above I'd like to target different parts of the tag to configure my index, sourcetype, and host dynamically. The sourcetype and host lines translate those placeholders literally to a string, so in Splunk, for example, I see the host field literally set to "${tag_suffix[2]}". But the source field I'm setting as a test works, and the source field in Splunk contains the whole tag. How can I target and utilize parts of the tag to configure my settings? Or is there a better way to set these values? I'm trying to avoid index-time operations on my indexers. Thanks!

Sources: I found the prefix, suffix, and parts syntax for tag targeting in record_transformer and wasn't sure if they would work: https://docs.fluentd.org/filter/record_transformer
Fluentd-to-HEC plugin, latest version: https://github.com/splunk/fluent-plugin-splunk-hec
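A possible workaround sketch, under the assumption that only record_transformer expands ${tag_parts[N]} while the splunk_hec output can read per-event metadata from record fields via its *_key options: copy the tag parts into the record first, then reference those fields in the output. The tag pattern, field names, and connection values here are all placeholders:

```
<filter my_prefix.**>
  @type record_transformer
  <record>
    # record_transformer expands tag placeholders into real values
    my_sourcetype ${tag_parts[1]}
    my_host ${tag_parts[2]}
  </record>
</filter>

<match my_prefix.**>
  @type splunk_hec
  index main
  # *_key settings read the value from the named record field per event
  sourcetype_key my_sourcetype
  host_key my_host
  source ${tag}
  hec_host HEC_Host
  hec_port 8088
  hec_token HEC_Token
  ca_file /fluentd/etc/server.pem
</match>
```

This is a sketch, not a tested config; check the fluent-plugin-splunk-hec README for the exact *_key option names supported by your plugin version.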
I am unable to get forwarders to show up in the console after installing the server and forwarder. I'm getting "no clients or apps are currently available on this deployment server".

I installed Splunk version 8.0.3 on an RHEL 7.7 server, and configured the indexer to listen on 9997 and the app server on 8088. I opened these ports on the Linux server:

8000 - Web
8088 - App Server (the others show open, but this port still shows closed when tested with a port scan; I noticed it is listening on 127.0.0.1:8088 whereas the others are on 0.0.0.0:port)
8089 - Management Port
8191 - KV Store
9997 - Indexer

I installed the Universal Forwarder on a Windows 10 1909 x64 client using this command:

msiexec "-i \\pathtomsi.msi -qn -l C:\logpath.log AGREETOLICENSE=YES SPLUNKUSERNAME=USER SPLUNKPASSWORD=PASS RECEIVING_INDEXER=server.domain.net:9997 WINEVENTLOG_APP_ENABLE=1 WINEVENTLOG_SEC_ENABLE=1 WINEVENTLOG_SYS_ENABLE=1 WINEVENTLOG_SET_ENABLE=0 WINEVENTLOG_FWD_ENABLE=0"

Any idea what is going wrong here or what I can do to troubleshoot the issue?
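One thing worth checking: the install command above sets a receiving indexer but never points the forwarder at the deployment server, and a forwarder only phones home if it is configured as a deployment client. A minimal sketch of that configuration on the Windows host (the server name is taken from the post; the file would live under the forwarder's etc\system\local):

```
# %SPLUNK_HOME%\etc\system\local\deploymentclient.conf
[deployment-client]

[target-broker:deploymentServer]
# management port (8089) of the deployment server, not 9997
targetUri = server.domain.net:8089
```

The MSI also accepts a DEPLOYMENT_SERVER="host:8089" property at install time, which writes the equivalent config; after adding it, restart the SplunkForwarder service and the client should appear under Forwarder Management within a few minutes.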
Looking to exclude certain values of the field instance. How can I achieve this?

Proposed code (not working):

index=abc sourcetype=xyz IncomingMessage
| rex field=source "/apps/prod/(?<instance>.*)/logs"
| where NOT LIKE instance IN ("foo", "bar")
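The last line mixes two syntaxes that `where` does not accept together. A sketch of one working form, reusing the field names from the post, is to move the IN test into a `search` pipe (which supports the IN operator directly):

```
index=abc sourcetype=xyz IncomingMessage
| rex field=source "/apps/prod/(?<instance>[^/]+)/logs"
| search NOT instance IN ("foo", "bar")
```

The `[^/]+` in the rex is a defensive tweak so the capture stops at the next path separator; on recent Splunk versions an equivalent `| where NOT in(instance, "foo", "bar")` using the eval in() function should also work.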
Hi All, so I clustered my search heads and added them to my indexer cluster. However, it broke all my lookup tables. I took a backup of /opt/splunk/etc before I did this on the search head that had all the lookups.

I guess my question is: is this as simple as just pasting the old lookups into /system/lookups/? If so, where do I do this? On the mgmt_uri, or on the captain search head? If not, can you point me to a doc to follow?
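For reference, in a search head cluster the usual pattern is not to paste files onto individual members but to push them from the deployer, since member-local changes can be overwritten by replication. A sketch, assuming a deployer is set up and using a placeholder app name:

```
# On the deployer: stage the backed-up lookup files inside an app, e.g.
#   $SPLUNK_HOME/etc/shcluster/apps/my_lookups/lookups/*.csv
# (plus the matching lookup definitions in that app's transforms.conf)

# Then push the bundle to the cluster, targeting any member's mgmt port:
splunk apply shcluster-bundle -target https://<any_member>:8089 -auth admin:changeme
```

The Splunk Enterprise docs on "Use the deployer to distribute apps and configuration updates" cover the exact layout and caveats.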
Hello, in the default configuration, the add-on for MS SCOM ingests all objects from a defined group in the SCOM database. For the group "internal" I have more than 30,000 objects from SCOM. Ingesting this data takes almost 10 minutes, and I don't need 95% of the objects. Is there any possibility to limit the request that is sent to SCOM? For example, by putting a filter into the request like below?

get-scomclass | ?{$_.name -like 'DataONTAP.Cluster.NodeVserver.Aggregate' -or $_.name -like 'DataONTAP.Cluster.DataVserver.Volume'} | Get-SCOMClassInstance

Example of the input file config:

[powershell://_Splunk_TA_microsoft_scom_internal_used_Status]
index = demo
schedule = 0 */10 * ? * *
script = & "$SplunkHome\etc\apps\Splunk_TA_microsoft-scom\bin\scom_command_loader.ps1" -groups "internal" -server demo_server

Thanks for any clue.
Dears, Maps+ has APIs for Google and Bing. However, I would like to add a layer based on open-source data using an API (please see the picture below). Thanks in advance!
Hi, I have two events with the following fields:

Event 1
Log.Status: IN
TransactionTime: IN time
Tracking id: unique ID (common between Event 1 and 2)

Event 2
Log.Status: OUT
TransactionTime: OUT time
Trackingid

I merged the events using the tracking ID and got the required fields. But under TransactionTime, I am seeing two times (IN and OUT) one below the other. Is there a way to list the times as separate fields in the merged results?
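One sketch of splitting the two times into separate columns is to make the stats functions conditional on Log.Status, so each status lands in its own field. Field names here follow the post but may need adjusting to your actual extractions:

```
index=your_index
| stats values(eval(if('Log.Status'="IN",  TransactionTime, null()))) as InTime
        values(eval(if('Log.Status'="OUT", TransactionTime, null()))) as OutTime
  by TrackingId
```

The single quotes around 'Log.Status' are needed because the field name contains a dot; with InTime and OutTime as separate fields you can also compute a duration, e.g. an eval subtracting their epoch forms.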
Hello, I need to evaluate my _time against a list of times output from a lookup table and produce a calculated field "nextPeriodTime", which is the next time after _time. I came quite close to the final desired result by using a combination of eval, foreach and mvfilter.

My search query:

index="nxs_mq"
| table interstep _time
| lookup params_vacations.csv interstep OUTPUT 0900,1000,1100,1200,1300,1400,1500,1600,1700
| eval nextPeriodTime=""
| eval eventTime=strftime(_time,"%H%M")
| eval eventDate=strftime(_time,"%Y-%m-%d")
| foreach *00 [| eval nextPeriodTime = if(isnotnull(mvfilter(match('<<FIELD>>',"Y"))),'<<MATCHSTR>>',"N/A")]

Results:

The problem is, for example on the first record, I need the value of nextPeriodTime to be "2020-04-21 12:00:00" instead of "17" (plus, I didn't quite understand why "17"?), because _time is after 11:00 and before 12:00, so the next period time from _time will be 12:00. Is it possible / does it make sense to enrich the mvfilter() part to achieve something like

mvfilter(match('<<FIELD>>',"Y") AND (_time > previous-'<<MATCHSTR>>' AND _time < next-'<<MATCHSTR>>'))

Thanks in advance for your help
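A note on the mvfilter idea: mvfilter's expression may reference only a single multivalue field, so it cannot compare against _time directly. A sketch of an alternative that sidesteps foreach entirely is to build the period list as a multivalue field and pick the first entry greater than the event's HHMM; this assumes Splunk 8.0+ for mvmap and hard-codes the period list for illustration:

```
| eval eventHHMM = tonumber(strftime(_time, "%H%M"))
| eval periods = split("0900,1000,1100,1200,1300,1400,1500,1600,1700", ",")
| eval later = mvmap(periods, if(tonumber(periods) > eventHHMM, periods, null()))
| eval nextHHMM = mvindex(later, 0)
| eval nextPeriodTime = if(isnotnull(nextHHMM),
    strftime(_time, "%Y-%m-%d ") . substr(nextHHMM,1,2) . ":" . substr(nextHHMM,3,2) . ":00",
    "N/A")
```

For an event at 11:47 this would yield "... 12:00:00". To keep only the Y-flagged periods from the lookup you would filter the periods field before the mvmap step rather than inside mvfilter.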
I do not have any admin privileges in my Splunk instance and cannot change any configuration. I need to search an index for any value matching what's in a lookup file. The problem is that the file contains 130K records and I get a max-out truncation at 10K. How do I overcome this? If you plan to tell me to split it up into 10K files, please don't.

index="bro" [ inputlookup bad_domains | fields domain ] | stats values(domain) by _time
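The 10K ceiling comes from the subsearch result limit; a sketch of a way around it is to skip the subsearch and instead run the lookup against each event after retrieval, keeping only events that matched. This assumes your bro events already have an extracted field holding the domain (shown here as a hypothetical field named query):

```
index="bro"
| lookup bad_domains domain AS query OUTPUT domain AS matched_domain
| where isnotnull(matched_domain)
| stats values(matched_domain) as domain by _time
```

The trade-off is that this scans all events in the index rather than letting the subsearch narrow the initial search, so constrain the time range where you can.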
On Splunk Enterprise 7.3.4, IT Service Intelligence 4.4.2. Every time we open a glass table created with the beta glass table editor, we get an info pop-up at the top stating "Click here to access documentation and provide feedback on the beta glass table editor. more" Users are tired of closing that every time they open a glass table. Is there a way to disable this message? I can't seem to find it.
Hello, I've always had trouble with automatic lookups; every time I manage to set one up, it seems I do it differently many times before it somehow works by magic.

I have a lookup table and a lookup definition. In both the lookup table and the event logs I have a field to match that is present in most events, and I want to apply every field in the lookup table (30+ fields) to the events that include the matched field. I've tried it two different ways: one table where the field name is different than in the source, and a second table where the field name is the same.

Event logs: _time, ID, fieldx, fieldy, fieldz
Lookuptable_v1: ID, fielda - fieldw
Lookuptable_v2: ID code, fielda - fieldw

I've tried matching the automatic-lookup input fields with the source where they were different, ID - ID code (and vice versa). I've also tried it where the tables have the same field name. Since I have 20+ fields, I'm only adding one or two to the output area until I get it to work, but it is not working. I either get an error that some fields have not been matched, OR it seems like nothing has happened at all. What gives?
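For comparison, the UI form ultimately writes a props.conf automatic-lookup line, and seeing the target syntax sometimes clarifies which side of the AS mapping is which: the lookup-table field name comes first, the event field after AS. A sketch with placeholder sourcetype, definition, and field names:

```
# props.conf -- hypothetical names throughout
[my_sourcetype]
# <lookup field> AS <event field>; omit AS when the names already match
LOOKUP-enrich = my_lookup_definition ID OUTPUT fielda fieldb fieldc

# transforms.conf
[my_lookup_definition]
filename = my_lookup_table.csv
```

Note that if OUTPUT is left empty, all non-match fields are returned automatically, which may be simpler than listing 30+ output fields one by one; and output fields that already exist in events are overwritten unless OUTPUTNEW is used instead.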
Guys, I am trying to see if I can specifically distinguish when the login attempts are coming from an external source. What else can I add to this search?

```
| tstats summariesonly=t allow_old_summaries=t prestats=t count(Authentication.user) from datamodel=Authentication where Authentication.action=block* OR Authentication.action=fail* groupby _time span=1h, Authentication.user
| tstats append=t summariesonly=t allow_old_summaries=t prestats=t count from datamodel=Authentication where Authentication.action=success groupby _time span=1d, Authentication.user
| stats count count(Authentication.user) by Authentication.user
| rename count as successes count(Authentication.user) as failures Authentication.* as *
| where successes>0 AND failures>100
| sort - failures
| head
```

I also tried the following, but it didn't work because sometimes the source could be a machine name or host name as letters:

NOT (src_ip="10.0.0.0/8" OR src_ip="172.16.0.0/12" OR src_ip="192.168.0.0/16")
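A plain `src_ip="10.0.0.0/8"` comparison is a literal string match, which is why it never excludes anything. A sketch of the CIDR-aware version, which also guards against the hostname-instead-of-IP case by only applying the CIDR test to values that look like IPv4 addresses:

```
| eval is_ip = if(match(src_ip, "^\d{1,3}(\.\d{1,3}){3}$"), 1, 0)
| eval is_internal = if(is_ip=1 AND (cidrmatch("10.0.0.0/8", src_ip)
        OR cidrmatch("172.16.0.0/12", src_ip)
        OR cidrmatch("192.168.0.0/16", src_ip)), 1, 0)
| where is_internal=0
```

In the tstats search above you would apply this after renaming the data model field (e.g. Authentication.src as src_ip); hostname sources pass through as "external" here, so you may want a lookup of internal hostnames as a second filter.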
How can I get a time chart of license usage based on source for the last 7 days? I tried the monitoring console to get the intended results, but it doesn't seem to give results for all source types. Regards.
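A sketch of the usual approach, querying the license usage log directly (this assumes you can search _internal on the license master; in license_usage.log, b is bytes and s is source):

```
index=_internal source=*license_usage.log* type="Usage" earliest=-7d
| timechart span=1d sum(b) as bytes by s
```

One caveat that may explain the monitoring console gap: when there are many distinct source/host values, the license manager "squashes" the s and h fields in Usage events to save space, leaving only sourcetype and index reliable; in that case per-source reporting has to come from the indexes themselves (e.g. a tstats count by source).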
I am trying to set up a multiselect with dynamic options and want to use a scheduled report to do so. When I try to select my report, it doesn't show up in the list, probably because the list is too large. However, I see something that says "This does not contain all reports. Add a report that is not listed from Reports." I can click on "Reports", but it just takes me to the list of reports, and I can't find a way to select that report to be used as the dynamic options for my multiselect input.
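If the UI picker won't cooperate, the reference can be added by hand in the dashboard's Simple XML source, where a populating search can point at a saved report by name via `ref`. A sketch with placeholder report, field, and token names:

```
<input type="multiselect" token="selected_instances">
  <label>Instances</label>
  <fieldForLabel>instance</fieldForLabel>
  <fieldForValue>instance</fieldForValue>
  <!-- ref must exactly match the saved report's name -->
  <search ref="My Scheduled Report"/>
</input>
```

Because the report is scheduled, the input can reuse its cached results instead of re-running the search each time the dashboard loads, which is exactly the benefit you're after with a large option list.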
I need help with tuning the search below; it creates too much noise. I would like to know what steps I can use to tune it, or whether there is better SPL available.

| tstats summariesonly=t dc("DNS.query") as count from datamodel=Network_Resolution where nodename=DNS "DNS.message_type"="QUERY" NOT (`cim_corporate_web_domain_search("DNS.query")`) NOT "DNS.query"="*.in-addr.arpa" NOT ("DNS.src_category"="svc_infra_dns" OR "DNS.src_category"="svc_infra_webproxy" OR "DNS.src_category"="svc_infra_email*") by "DNS.src","DNS.query"
| rename "DNS.src" as src "DNS.query" as message
| eval length=len(message)
| stats sum(length) as length by src
| append [ tstats summariesonly=t dc("DNS.answer") as count from datamodel=Network_Resolution where nodename=DNS "DNS.message_type"="QUERY" NOT (`cim_corporate_web_domain_search("DNS.query")`) NOT "DNS.query"="*.in-addr.arpa" NOT ("DNS.src_category"="svc_infra_dns" OR "DNS.src_category"="svc_infra_webproxy" OR "DNS.src_category"="svc_infra_email*") by "DNS.src","DNS.answer"
  | rename "DNS.src" as src "DNS.answer" as message
  | eval message=if(message=="unknown","", message)
  | eval length=len(message)
  | stats sum(length) as length by src ]
| stats sum(length) as length by src
| where length > 10000
Is there a best practice/method to accomplish the following? We would like to create a user with a very limited role: log in and view only specified dashboards, but explicitly DENY them the right to search. The dashboards we want them to have access to do use real-time data searches.

What I have tried:

1. I tried creating a role (dashboards_only) that allowed only: change_own_password, get_metadata, rest_properties_get, search, rtsearch. But that still allows them to use the Search and Reporting app to run searches.
2. If I take away search and rtsearch from that role, the dashboards do not grab any data.
3. I changed the permissions on the Search and Reporting app to not allow the "dashboards_only" role to read/write. But that also breaks the queries on the dashboards.

Any thoughts on accomplishing this?
I am new to Splunk. I am using the InfoSec app and I have a question, please. I am getting logs from the firewall after executing this command:

| datamodel Network_Traffic All_Traffic search

But the Network_Traffic data model doesn't show any results for this request:

| tstats summariesonly=true allow_old_summaries=true count from datamodel=Network_Traffic.All_Traffic where (All_Traffic.action=blocked OR All_Traffic.action=deny)

Any idea how to resolve this?
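A quick diagnostic sketch: with summariesonly=true, tstats reads only the accelerated summaries, so if acceleration isn't enabled or hasn't finished building, you get zero results even though the raw search works. Re-running without that restriction separates the two cases:

```
| tstats summariesonly=false allow_old_summaries=true count
    from datamodel=Network_Traffic.All_Traffic
    where (All_Traffic.action=blocked OR All_Traffic.action=deny)
```

If this version returns counts while the summariesonly=true version returns nothing, check Settings > Data models > Network_Traffic and verify acceleration is enabled and the build is complete; also confirm your firewall events actually populate All_Traffic.action with "blocked"/"deny" values per the CIM.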
When the search query for the report is run in Splunk, it shows 15,000 events for the last 7 days (for example 4/14-4/21), but when I export the results to CSV, it exports events only for 4/21. I have reset dispatch.max_count to 0, but it still doesn't retrieve all events in the CSV when the report runs.