All Topics

Hi guys, I want to know how I can configure multiple drilldowns on a table in Dashboard Studio. I need to put different links on rows and columns, but Dashboard Studio only offers one way.
Hello, I just started a new position where I've inherited management of large queries that need to be updated periodically. They typically involve matching regexes against a field and applying a label to the matches. One involves a huge case statement:

| eval label=case(match(field,"regex1"), label1, match(field,"regex2"), label2, match(field,"regex3"), label3, ...)

The regexes are updated regularly, hence my wanting to make this more manageable. My first thought was to use a lookup table with the regex and label, but I'm open to other suggestions. I did find https://community.splunk.com/t5/Splunk-Search/How-do-I-match-a-regex-query-in-a-CSV and have been able to use regexes in the lookup table with the search suggested in the solution:

| where [| inputlookup regexlookup.csv
    | eval matcher="match(subject,\"".regex."\")"
    | stats values(matcher) as search
    | eval search=mvjoin(search, " OR ")]

But I'm wondering how to also apply the label to the results with a lookup like this:

regex,label
regex1,label1
regex2,label2

Thanks in advance.
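A minimal, untested sketch of one possible way to apply the label from the same lookup, building the whole case() expression dynamically from regexlookup.csv in the same way the quoted solution builds the OR list (field, regexlookup.csv, regex, and label are the names used in the question; the "unmatched" default is an assumption):

| eval label=[ | inputlookup regexlookup.csv
    ```build one match(...), "label" pair per lookup row```
    | eval clause="match(field, \"" . regex . "\"), \"" . label . "\""
    | stats list(clause) as clause
    ```join the pairs into a single case() expression with a default value```
    | eval search="case(" . mvjoin(clause, ", ") . ", true(), \"unmatched\")"
    | fields search ]

The idea is that the subsearch returns a field literally named search, so its value should be substituted verbatim into the outer eval before it runs; updating the lookup then updates both the matching and the labeling in one place.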
Indexers are getting blocked periodically throughout the day, causing our heavy forwarders to stop forwarding data.

Error on the HF:

07-25-2022 09:22:55.982 -0400 WARN AutoLoadBalancedConnectionStrategy [5212 TcpOutEloop] - Cooked connection to ip=10.3.13.4:9997 timed out
07-25-2022 09:27:14.858 -0400 WARN TcpOutputFd [5212 TcpOutEloop] - Connect to 10.3.13.4:9997 failed. Connection refused
07-25-2022 10:58:06.973 -0400 WARN TcpOutputFd [5034 TcpOutEloop] - Connect to 10.3.13.4:9997 failed. Connection refused

Whenever this was found on a HF, the indexer with that IP had closed port 9997 and the indexer's queues were full.

Log messages on the indexer:

07-26-2022 08:51:28.857 -0400 ERROR SplunkOptimize [37311 MainThread] - (child_304072__SplunkOptimize) optimize finished: failed, see rc for more details, dir=/opt/splunk/var/lib/splunk/_metrics/db/hot_v1_70, rc=-4 (unsigned 252), errno=1
07-26-2022 08:56:51.759 -0400 INFO IndexWriter [37662 indexerPipe_1] - The index processor has paused data flow. Too many tsidx files in idx=_metrics bucket="/opt/splunk/var/lib/splunk/_metrics/db/hot_v1_70", waiting for the splunk-optimize indexing helper to catch up merging them. Ensure reasonable disk space is available, and that I/O write throughput is not compromised.
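For anyone trying to reproduce the pattern, a minimal sketch of a search for watching how full the indexer queues get over time, using only the standard metrics.log queue fields (the 5-minute span and top-20 limit are arbitrary choices):

index=_internal source=*metrics.log* group=queue
```per-queue fill percentage from the standard metrics.log fields```
| eval pct_full=round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m limit=20 max(pct_full) by name

Sustained 100% on the indexing-side queues would line up with the "index processor has paused data flow" message above, which in turn would explain why port 9997 backs off on the heavy forwarders.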
I'm trying to override the host metadata with a regex on source, but it's not working as expected. The events are arriving at the indexers with the forwarding server's hostname. I've referenced a few similar issues from the community site in my attempts, but I'm not having much luck. Can anyone see what's wrong, please? In the example below, the host metadata should be overridden with host1.

Input:

[monitor:///opt/pyLogger/logs/host1]
SHOULD_LINEMERGE = False
sourcetype = network

Props:

[network]
TRANSFORMS-host = overridehost

Transforms:

[overridehost]
SOURCE_KEY = MetaData:Source
DEST_KEY = MetaData:Host
REGEX = (\w+)$
FORMAT = host::$1

The data is arriving at the indexers with the host set to the server the forwarder is running on.
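If the host name is always a fixed path segment (as it is in /opt/pyLogger/logs/host1), a simpler alternative worth considering is host_segment in inputs.conf, which sets the host from the Nth segment of the path at input time; a minimal sketch under that assumption:

[monitor:///opt/pyLogger/logs/host1]
SHOULD_LINEMERGE = False
sourcetype = network
# /opt(1)/pyLogger(2)/logs(3)/host1(4)
host_segment = 4

Two points that may also explain the behaviour, though they are inferences from the snippet rather than certainties: props/transforms host overrides are index-time rules, so they only take effect on the instance that first parses the data (a heavy forwarder if one is in the path, otherwise the indexers), and (\w+)$ applied to a full file path underneath .../host1/ would capture the end of the file name rather than "host1".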
From time to time some searches or alerts are not running. The owners of the knowledge objects found that the affected ones are saved searches with some attributes missing, including "search = ". Each time we found the issue on an object we corrected it by recreating it with the attributes, but it keeps happening to many alerts/searches. We are using version 8.2.1 with an SHC of 12 members.
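A minimal sketch of a REST search for spotting affected objects across the cluster before they silently stop running (the assumption is simply that a saved search with an empty search attribute is one of the broken ones):

| rest /servicesNS/-/-/saved/searches count=0
| where isnull('search') OR len('search')=0
| table splunk_server, eai:acl.app, eai:acl.owner, title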
I am trying to build a trending dashboard of the count of tickets we get. With the query below, I am trying to display the increase/decrease in the count of tickets we got for each type over the past two weeks.

index=tickets Status IN ("Open", "Pending") earliest=-14D@w0 latest=@w0
| dedup ticketid
| eval ticket_type=case(like(Tags,"%tag1%"),"Type1", like(Tags,"%tag2%") AND !like(Tags,"%tag1%"), "Type2", like(Tags,"%tag3%") AND !like(Tags,"%tag1%") AND !like(Tags,"%tag2%"), "Type3")
| timechart usenull=f span=1w count by ticket_type

The problem is that whenever we have a higher count of tickets for the previous week, the data is shown with a "-" (minus) sign.

Questions:
1. I'm not able to understand why a "-" would be there for a count.
2. Is there a way to suppress the "-" sign?
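If the "-" is coming from a trend/delta indicator rather than from count itself (count alone cannot go negative), one possible way to compute and control the week-over-week change explicitly is sketched below, reusing the field names from the query above; note that case() stops at the first matching condition, so the !like() guards are not strictly needed:

index=tickets Status IN ("Open", "Pending") earliest=-14d@w0 latest=@w0
| dedup ticketid
| eval ticket_type=case(like(Tags,"%tag1%"),"Type1", like(Tags,"%tag2%"),"Type2", like(Tags,"%tag3%"),"Type3")
| timechart usenull=f span=1w count by ticket_type
```flip to one row per week per type, then compare each week with the previous one```
| untable _time ticket_type weekly_count
| streamstats current=f window=1 last(weekly_count) as prev_week by ticket_type
| eval change=weekly_count - prev_week
| eval change_label=if(change>=0, "+" . change, tostring(change))

From there the sign can be formatted however the dashboard needs, for example with abs(change) if only the magnitude should be shown.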
Hello, I'm just having a bit of difficulty differentiating between Splunk Enterprise, ITSI, SOAR, UBA, and Enterprise Security. It seems like they all do similar things. Do they all work together, or would it be redundant to have all of these at the same time?
Hi, I am new to Splunk, so I was wondering if somebody could help me with this problem: I have deployed Splunk Standalone with the splunk-operator in my Kubernetes cluster. Now I would like to connect to an MS SQL Express DB with DB Connect. The installation of DB Connect works fine, but when I want to configure DB Connect, I need to give the Task Server a valid JRE path. The standalone image has no Java installed, so there is no path I can refer to. How can I install Java on the Splunk standalone? I found this in the docs, but I don't quite understand how to do it this way ... (https://github.com/splunk/splunk-ansible/blob/develop/docs/advanced/default.yml.spec.md) Best regards
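For context, the linked splunk-ansible spec describes a java_version default; if the operator passes defaults through to that spec as documented, a rough, unverified sketch of wiring it into the Standalone custom resource might look like the following (the apiVersion, the inline defaults block, and the "openjdk:11" value are all assumptions to check against your operator and image versions):

apiVersion: enterprise.splunk.com/v3
kind: Standalone
metadata:
  name: s1
spec:
  defaults: |-
    # ask the splunk-ansible provisioning inside the image to install a JDK
    java_version: "openjdk:11"

An alternative that sidesteps the question entirely is to bake or mount a JRE into the container and point the DB Connect Task Server at that path.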
Is there a way to use multivalue fields in SOAR? I have not been able to find a good article on how to do this. We have a few sets of logs that use multivalue fields. By default, the SOAR export app will create individual artifacts with only one value in a field if the event has multiple values in that field. I know you can turn on the option in the SOAR export app to send all values as a multivalue field to SOAR, but the issue is that when the actions inside a playbook see the data in a multivalue field, they usually fail.

Here is an example of how an artifact looks when I send an event as a multivalue field to SOAR:

md5_hash: ["55df197458234c1b48fed262e1ed2ed9","55df1974e1b8698765fed262e1ed2ed9"]
sha1_hash: ["b50e38123456ee77ab562816ab0b81b2ab7e3002","b50e3817d416ee77ab562816ab0b81b2ab7e3002"]
sha256_hash: ["b8807c0df1ad23c85e42104efbb96bd61d5fba97b7e09a9e73f5e1a48db1e27e","b8807c0df1ad23c81234504efbb96bd61d5fba97b7e09a9e73f5e1a48db1e27e"]
domains: ["some1.domain1.com","some.domain.com"]
urls: ["https://domain.com/uri/uri/picture.png ","https://some1.domain1.com/uri/uri/uri/something.gif ","https://some2.domain2.com/uri/uri/uri/something.gif ","https://some3.domain3.com/uri/uri/uri/something.gif.gif "]

Here is an example error if I were to try to get the IPs of the domains field:

Error Code: Error code unavailable. Error Message: None of DNS query names exist: ['some1.domain1.com',\032'some.domain.com']., ['some1.domain1.com',\032'some.domain.com'].localdomain. type = A domain = ['some1.domain1.com','some.domain.com']

I believe the playbook is seeing the data in the field as a single string rather than a list, so it is literally querying the A records for "['some1.domain1.com',\032'some.domain.com']" instead of "some1.domain1.com" and "some.domain.com". How can I split these multivalue fields up so that the playbook runs them as a for loop and outputs the results as one block of data?
Prior to upgrading to Splunk Enterprise 9.0 (we were on 8.2.6), when creating or editing a role, the Indexes tab had a full list of our indexes. After the upgrade, existing roles still show the checked indexes but are missing the other available indexes. When creating a new role, almost all indexes are missing from the list. We are running an SHC and an indexer cluster. I have seen this issue in the past, and we had to deploy a list of our indexes to our SHC. Another possible fix is to allow (All non-internal indexes) and add restrictions. Has anyone else had this issue, or does anyone know of a fix?
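A minimal sketch for checking which indexes each tier can actually enumerate, which may help confirm whether the search heads simply no longer know about the indexes (the assumption being that the role editor builds its list from what the search head itself can see):

| rest /services/data/indexes count=0
| stats values(splunk_server) as known_on by title
| sort title

Running it with splunk_server=local added to the rest command restricts the answer to the search head you are on, which makes the comparison with the indexer peers explicit.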
Hello, I am new to Splunk, I have no idea how to do this, and I am asking for your help. This is my question: can we force a query to launch first? It would mean launching the query |rest /servicesXY/-/-/saved/searches timeout=0 before the rest. Thank you very much for your time and help.
Hi, after installing the AppDynamics agent API in MAUI on Visual Studio 2022 Preview, the project does not compile, but in Xamarin.Forms it does. The error it gives me is this:

Error AMM0000 Attribute application@appComponentFactory value=(androidx.core.app.CoreComponentFactory) from AndroidManifest.xml:24:18-86 is also present at AndroidManifest.xml:22:18-91 value=(android.support.v4.app.CoreComponentFactory). Suggestion: add 'tools:replace="android:appComponentFactory"' to element at AndroidManifest.xml:12:3-29:17 to override.

I already tried to put tools:replace="android:appComponentFactory" in the Android manifest, and I also added xmlns:tools="schemas.android.com/tools" for tools to work. But nothing worked; the same error appeared. Thanks.
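One detail worth double-checking, since the question quotes the namespace without a scheme: in a standard Android manifest the tools namespace has to be declared with the full http:// URI, and tools:replace sits on the <application> element together with the value you want to win. A minimal sketch of that shape (whether the MAUI/AMM merge step honours it is an assumption to verify):

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          xmlns:tools="http://schemas.android.com/tools">
  <!-- keep the androidx factory and tell the manifest merger to drop the support-library one -->
  <application
      tools:replace="android:appComponentFactory"
      android:appComponentFactory="androidx.core.app.CoreComponentFactory">
  </application>
</manifest>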
Hi, I need to report on when a notable alert was changed from the default "Unassigned" status to "Acknowledged", and from "Acknowledged" to "Resolved", along with the time it took between each status. Basically, we are trying to create a dashboard of all alerts whose SLA was missed. We have an SLA of 10 minutes for a notable alert to be picked up, meaning an analyst should change its default "Unassigned" status to "Acknowledged". Likewise, there is an SLA of 30 minutes to further change it from "Acknowledged" to "Resolved".

Running the following query, Splunk shows the _time value for each alert when it was acknowledged and when it was resolved, but it does NOT show when the alert was triggered/generated, so that does not leave me with any starting point to compare against.

| `incident_review`
| table _time rule_id rule_name owner reviewer status_label
| where _time > relative_time(now(),"-1d@d")
| eval Status_Time=strftime(_time,"%Y-%m-%d %H:%M:%S")

Output:

_time | rule_id | rule_name | owner | reviewer | status_label
07 July 2022 08:00:00 | xxxxx | AWS001_xx | John | John | Acknowledged
07 July 2022 08:10:00 | xxxxx | AWS001_xx | John | John | Resolved
07 July 2022 08:01:00 | yyyyy | AWS002_xx | Jerry | Jerry | Acknowledged

1) How can I compose a query to show me a list of all alerts (rule_name) which were acknowledged more than 10 minutes late and resolved more than 30 minutes late? I am assuming this will involve some eval logic to calculate the difference between acknowledged_time and triggered_time and check whether the difference is > 10 minutes; if it is, then eval SLA_status = breached, else SLA_status = met. Likewise for resolved_time as well.

I am assuming a lot of you ES folks must be doing this kind of SLA metrics tracking some way or other. Kindly assist. Thanks in advance.
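A minimal sketch of one possible approach, assuming the notable events themselves are searchable in index=notable and that their rule_id matches the rule_id reported by `incident_review` (both assumptions to verify in your ES environment); the 600- and 1800-second thresholds are the 10- and 30-minute SLAs from the question:

index=notable
```creation time comes from the notable event itself```
| eval created_time=_time
| fields rule_id, rule_name, created_time
| append [ | `incident_review`
    ```status-change times come from the incident review audit trail```
    | eval ack_time=if(status_label=="Acknowledged", _time, null()), resolved_time=if(status_label=="Resolved", _time, null())
    | fields rule_id, ack_time, resolved_time ]
| stats earliest(created_time) as created_time earliest(ack_time) as ack_time earliest(resolved_time) as resolved_time values(rule_name) as rule_name by rule_id
| eval ack_delay=ack_time-created_time, resolve_delay=resolved_time-ack_time
| eval ack_sla=if(ack_delay>600, "breached", "met"), resolve_sla=if(resolve_delay>1800, "breached", "met")
| where ack_sla="breached" OR resolve_sla="breached"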
We have notable events for when a user is created on multiple devices. Most of them are expected, for when devices are imaged. I want to use erex to create a suppression for accounts like these. They typically have the same beginning and are followed by 2 numbers; examples would be ituser23, ituser24, ituser25. I am using the search below for testing:

index=notable source="Endpoint - Anomalous User Account Creation - Rule"
| erex user examples="ituser23, ituser24, ituser25"

I am still getting user accounts that are unrelated, such as phone or tablet accounts. When I look at the recommended regex, it seems like it is not being granular enough.
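Since erex only suggests a regex from the examples, a hand-written pattern is usually tighter; a minimal sketch, assuming the imaging accounts always look like "ituser" followed by exactly two digits:

index=notable source="Endpoint - Anomalous User Account Creation - Rule"
```keep only users matching the imaging-account pattern```
| regex user="^ituser\d{2}$"

The same anchored pattern could then be reused in the suppression definition, and inverting it with user!="^ituser\d{2}$" gives the complementary set if that is what the report needs.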
First, it isn't clear to me what units the various timeseries in a metric are returning. It feels pretty arbitrary to me. I was wondering if perhaps the ns portion of a metric stood for nanoseconds? That would at least make this more clear. But I suppose it could also stand for namespace.
Can't I just search an IP within Splunk with no syntax, just 192.15.10.1? If there is any data, or this IP is simply being accessed by one of our users, then I should be able to see it. Are there better ways to find it? Overall, I want to see if two specific IPs are connecting to Splunk; if so, then I will broaden the search.
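A minimal sketch of a broader starting point, assuming the goal is to check every readable index over a bounded time range and see where the two IPs show up (the second IP and the 24-hour window are placeholders):

(index=* OR index=_*) earliest=-24h ("192.15.10.1" OR "192.15.10.2")
| stats count by index, sourcetype, source

Because the dots in an IP address are minor breakers, wrapping the value as TERM(192.15.10.1) searches it as a single indexed term, which is typically faster than the segmented phrase search.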
Hello everyone, I have a field named SQL_NAME with values as per below (I'm showing two of them):

#1(8):EMEMEB #2(14):8/3/2022 0:0:0 #3(13):Ememe Behe #4(3):409 #5(0):
#1(6):TSUDE #2(14):8/1/2022 0:0:0 #3(10):Tugu Sude #4(3):411 #5(0):

and I want to extract two fields named user and name, with their values being the bold strings above, using a regular expression. Any idea? Thank you in advance.
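Assuming the bold strings were the #1 and #3 values (EMEMEB / Ememe Behe in the first sample, TSUDE / Tugu Sude in the second), a minimal rex sketch against the sample format, where user is the token right after "#1(n):" and name is everything between "#3(n):" and " #4(":

| rex field=SQL_NAME "#1\(\d+\):(?<user>\S+)\s+#2\(\d+\):.*?#3\(\d+\):(?<name>.+?)\s+#4\("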
Hello, I have the chart created as an area chart, shown in the attached screenshot. The tooltip that is being shown is the value of the column at that point in time. I would like to edit this tooltip and add some more info taken from a third column, and I don't want to show the third column in the chart.

Output from the query:

_time | Evictions | Hits
2022-07-26 10:03:30.864 | 0 | 0
2022-07-26 10:03:32.021 | 0 | 0
2022-07-26 10:03:33.184 | 0 | 0
2022-07-26 10:03:34.460 | 12803 | 131779
2022-07-26 10:03:35.812 | 24627 | 251059
2022-07-26 10:03:37.330 | 40209 | 404141
2022-07-26 10:03:38.979 | 57844 | 576308

I have changed the query to have the output as:

_time | Evictions | Hits | Reads
2022-07-26 10:03:30.864 | 0 | 0 | 0
2022-07-26 10:03:32.021 | 0 | 0 | 0
2022-07-26 10:03:33.184 | 0 | 0 | 0
2022-07-26 10:03:34.460 | 12803 | 131779 | 131779
2022-07-26 10:03:35.812 | 24627 | 251059 | 251059
2022-07-26 10:03:37.330 | 40209 | 404141 | 404141
2022-07-26 10:03:38.979 | 57844 | 576308 | 576308
2022-07-26 10:03:40.523 | 73288 | 727097 | 727097
2022-07-26 10:03:41.851 | 87340 | 859045 | 859045

I want to have the Reads column added to the tooltip, so it shows just under the existing tooltip values, without being drawn as a series in the chart.
How do I query for when a quota/spike arrest is close to being exceeded, e.g. at 80% of the configured quota as set by spike arrest? The quota limit is 600.
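A minimal sketch under assumed names (index=apigee and the apiproxy grouping field are placeholders for wherever the quota counters actually live; the 600 limit and 80% threshold come from the question):

index=apigee earliest=-1h
| stats count as request_count by apiproxy
| eval quota_limit=600, pct_of_quota=round(request_count / quota_limit * 100, 1)
| where pct_of_quota >= 80

This flags any proxy whose request volume in the window has reached 80% of the configured limit; wiring it to a scheduled alert would give the "close to being exceeded" warning.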
Hi, we've installed TA-Akamai_SIEM on both a HF and an SH. The API connections appear to be coming in fine, we get JSON data, and on the SH I can see the dashboards populated correctly. However, if I search the relevant index, data is still appearing in JSON format. Reading the notes for this app, I believe the scripting should kick in and convert the JSON to a CIM-compliant format, but that doesn't seem to be happening. I do have (thousands of) errors appearing relating to Java, but it seems to be the same error that pops up in other people's problems and doesn't give much of an insight.

08-04-2022 12:18:09.203 +0100 INFO ExecProcessor [3239918 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=Parse KVstore data...Complete
08-04-2022 12:18:09.229 +0100 INFO ExecProcessor [3239918 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg = streamEvents, end streamEvents
08-04-2022 12:18:09.229 +0100 ERROR ExecProcessor [3239918 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" javax.xml.stream.XMLStreamException: No element was found to write: java.lang.ArrayIndexOutOfBoundsException: -1

Splunk is running on 9.0.0, and Java on the HF appears to be OK; java -version returns:

java version "1.8.0_333"
Java(TM) SE Runtime Environment (build 1.8.0_333-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.333-b02, mixed mode)

Has anybody seen any similar problems to the above?

Thanks
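A minimal sketch for scoping how widespread the scripted-input error is (just an _internal search; component and log_level are standard splunkd log fields):

index=_internal sourcetype=splunkd log_level=ERROR component=ExecProcessor "TA-Akamai_SIEM"
| timechart span=1h count by host

If the count lines up with every polling interval, the XMLStreamException is being thrown on each run rather than only on occasional bad batches, which narrows where to look.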
Hi We've installed TA-Akamai_SIEM on both a HF and SH. The API connections appear to be coming in fine, we get JSON data and on the SH, I can see the Dashboards populated correctly. However, if I search the relevant index, data is still appearing in JSON format.  Reading the notes for this app, the Scripting I believe should kick in and convert the JSON to CIM compliant format, but that doesnt seem to be happening. I do have (thousands of) errors appearing relating to Java, but it seems to be the same error that pops up on other people's problems and doesnt give much of an insight.  08-04-2022 12:18:09.203 +0100 INFO ExecProcessor [3239918 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=Parse KVstore data...Complete 08-04-2022 12:18:09.229 +0100 INFO ExecProcessor [3239918 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg = streamEvents, end streamEvents 08-04-2022 12:18:09.229 +0100 ERROR ExecProcessor [3239918 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" javax.xml.stream.XMLStreamException: No element was found to write: java.lang.ArrayIndexOutOfBoundsException: -1 Splunk is running on 9.0.0 and Java on the HF appears to be OK, java -version returns    java version "1.8.0_333" Java(TM) SE Runtime Environment (build 1.8.0_333-b02) Java HotSpot(TM) 64-Bit Server VM (build 25.333-b02, mixed mode)   Has anybody seen any similar problems to the above?   Thanks