All Topics

Hi, what would be the best way to check that, after a user has been added to a group, they have not been removed from the same group within, say, 24 hours? I currently have a search against the winevent index that produces a table of group additions and group removals. What is the best way to find events where there has been an addition but no removal for the same user and group within 24 hours? I started to look at | transaction, but I don't think it fits, since I'm interested in the absence of a removal after a time period. Failing that, if anyone has an alternative way to alert when a user has been added to a group and not removed within a time period, that would be much appreciated. Thanks
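A minimal sketch of one approach, assuming Windows Security events where EventCode 4728 is a group addition and 4729 a removal, with user and group fields already extracted (the index, EventCodes, and field names are assumptions — substitute whatever your existing table search uses):

index=wineventlog EventCode IN (4728, 4729) earliest=-48h
| eval action=if(EventCode=4728, "added", "removed")
| stats min(eval(if(action="added", _time, null()))) as first_add, count(eval(action="removed")) as removals by user, group
| where isnotnull(first_add) AND removals=0 AND first_add <= relative_time(now(), "-24h")

Searching 48 hours but only keeping additions older than 24 hours gives each addition a full 24-hour window in which a removal could have appeared, avoiding the transaction command entirely. Note that 4728/4729 cover security-enabled global groups; local groups use 4732/4733.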
Hello, I have raw data that goes like this:

... in[ 60: ]<3034> in[ 62: ]<10> in[ 62: ]<EC_CARDVER> ...

I want to extract EC_CARDVER into a field named msg. My rex is

| rex field=_raw "(in)\[ 62: \]\<(?P<msg>)\>"

but it doesn't capture anything. How do I write it to extract only EC_CARDVER and not the 10 above it?
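The named group (?P<msg>) contains no pattern, so it matches zero characters — that is why nothing is captured. A sketch of a fix, assuming the values you want always start with a letter (so <10> is skipped while <EC_CARDVER> is kept):

| rex field=_raw "in\[ 62: \]<(?P<msg>[A-Za-z][^>]*)>"

The [A-Za-z] requires a letter as the first character inside the angle brackets, and [^>]* takes everything up to the closing >.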
I have created a query to detect too much blocked traffic to one single destination. Somehow this doesn't work; help me to resolve this.

bin _time span = 5m as timespan
| eval start time = strptime(connection_start_time,"%Y-%m-%d %H:%M:%S")
| stats dc(D_IP) as num values(start_time) by src_ip
| search num>3
| sort num desc

I want to display the src ip, hostname, destination ip, and count.
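A hedged rewrite with the syntax errors fixed: bin takes no alias written that way, eval field names cannot contain spaces (start time), and stats needs proper spacing and an alias on each values() call. The base search and the dest_host field are illustrative assumptions, so swap in your own:

index=firewall action=blocked
| bin _time span=5m
| eval start_time=strptime(connection_start_time, "%Y-%m-%d %H:%M:%S")
| stats dc(D_IP) as num, values(D_IP) as dest_ip, values(dest_host) as hostname, count by src_ip
| where num > 3
| sort - num

The final table then carries src_ip, hostname, dest_ip, and count, with sort - num giving the descending order the original attempted.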
Hi all, I am new to Splunk. I am trying to make a table out of a log that contains various fields like Level = INFO, and among them a field Log:

Log = {"objects":[object1, object2 ...], "info": "some strings", "id1": someInt, "id2": someInt}
Log = {"objects":[object1, object2 ...], "info": "some other strings", "id1": someOtherInt, "id2": someOtherInt}
Log = { "info": "some log strings"}
Log = "some string"

I have tried a few rex and spath variations, but they don't seem to work well. I would like to extract the "objects" field filtered by "info": for example, sometimes I need the objects from the first Log above and sometimes from the second (for different panels in a dashboard), and the way to tell them apart is the "info" value. I then need to display the objects in a chart under a column. Any help/hints are appreciated!
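A small sketch with spath, assuming the Log field holds valid JSON and that "some strings" is the info value the panel should filter on (both names come from the examples above; adjust per panel):

| spath input=Log path=info output=info
| spath input=Log path=objects{} output=objects
| where info="some strings"
| mvexpand objects
| stats count by objects

The objects{} path pulls every element of the JSON array into a multivalue field, mvexpand gives each object its own row, and the stats output charts directly as a column chart. Events where Log is not JSON simply get no info field and drop out at the where.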
query:

index=xxx host=xx sourcetype=xxx source=xxx
| rex field=_raw "MeasuStatus:(?<Status>.*?)\|"
| where isnotnull(Status)
| eval Success=if(Status="0", "Done", null())
| eval Failed=if(Status!="0", "notDone", null())
| stats count(Success) as SuccessC count(Failed) as FailedC count(Status) as overall
| eval SuccessPerc=(SuccessC/overall)*100
| eval SuccessPercentage=round(SuccessPerc, 2)
| table SuccessPercentage

The query above works, but it takes a long time to return results. Can anyone suggest how to make it run faster?
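A hedged sketch of a leaner version: putting the literal string in the base search (assuming "MeasuStatus" appears verbatim in matching events) lets Splunk discard non-matching events at the index level instead of rex-ing everything, and the eval/count steps collapse into a single stats call:

index=xxx host=xx sourcetype=xxx source=xxx "MeasuStatus"
| rex field=_raw "MeasuStatus:(?<Status>.*?)\|"
| stats count(eval(Status="0")) as SuccessC, count(Status) as overall
| eval SuccessPercentage=round((SuccessC/overall)*100, 2)
| table SuccessPercentage

Since only the percentage is tabled, there is no need to materialize the Done/notDone labels at all.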
One of our clients recently ran a vulnerability scan against Splunk Enterprise 8.2.7 and flagged the bundled Apache Spark and Apache Hive packages: bin\jars\vendors\spark\3.0.1\lib\spark-core_2.12-3.0.1.jar and \bin\jars\thirdparty\hive_3_1\hive-exec-3.1.2.jar. I see that version 9.0 uses a patched version of Hive (3.1.3) and no longer ships Spark. Has anyone else found this?
I am trying to set up anomaly detection based on the number of ModSecurity warnings in the log in real time, to indicate an attack. I set up an experiment in the MLTK with the query below and set up the alerting system. The app is used Mon-Fri, so we tried to account for the low traffic on Sat-Sun in the data model. Unfortunately, the alert is being triggered constantly. Can anyone assist with how to write the query or set up the experiment for my use case?

index="APP_NAME_production" source="<PATH TO LOG>/modsec_audit.log:*" "ModSecurity: Warning"
| bin _time span=15m
| stats sum(linecount) as total_lines by _time
| eval HourofDay=strftime(_time,"%H")
| eval DayofWeek=strftime(_time, "%A")
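A hedged sketch of an alternative using MLTK's DensityFunction, splitting the baseline by day of week so the quiet weekend gets its own distribution instead of dragging the weekday threshold down (the model name and threshold value are illustrative assumptions, and exact algorithm options vary by MLTK version):

index="APP_NAME_production" source="<PATH TO LOG>/modsec_audit.log:*" "ModSecurity: Warning"
| bin _time span=15m
| stats sum(linecount) as total_lines by _time
| eval DayofWeek=strftime(_time, "%A")
| fit DensityFunction total_lines by "DayofWeek" threshold=0.005 into modsec_warning_baseline

The scheduled alert would then run the same base search followed by | apply modsec_warning_baseline and trigger on 'IsOutlier(total_lines)'=1; lowering the threshold directly reduces how often the alert fires.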
I just installed this app and found it simple to set up... but I must be doing something wrong. I've created trap information on my two UPS devices and haven't had any luck bringing them into Splunk. I enabled SNMP (all versions), then added the IPs for the traps to point to, including my Splunk Cloud DNS name, deployment IP, HF, and UF, and haven't seen anything come in. It also says the default sourcetype is snmp_ta.

Activation Key: * Using 14 day free trial
Log Level: INFO (I've also tried DEBUG)
SNMP Mode: Listen For Traps
SNMP Version: 2C

I left everything else blank except the SNMP trap listener settings, where I put the IP address of the UPS I'm trying to get the information from.
I saw that a new version of this add-on was released to support OAuth. The instructions for setting up the Client ID are truncated: "The Reporting Web Service should now appear in the list of applications that your app requires permissions for <blank". I added ReportingWebService.Read.All to the Client ID I already use for other O365 logs and configured the new TA, but this still gives me a 401 error. Are there additional permissions required?
So I have migrated to Splunk Cloud, but I still have a deployment server, UF, and HF. How do I find out what my IP is for Splunk Cloud? I'd like to be able to send trap information directly there, especially since the UPS keeps saying it failed to find the DNS name. Thank you.
Hi guys, I want to know how to configure multiple drilldowns on a table in Dashboard Studio. I need to put different links on rows and columns, but Dashboard Studio only offers a single drilldown.
Hello, I just started a new position where I've inherited management of large queries that need to be updated periodically. They typically involve matching regexes against a field and applying a label. One involves a huge case statement:

| eval label=case(match(field,"regex1"), "label1", match(field,"regex2"), "label2", match(field,"regex3"), "label3", ...)

The regexes are updated regularly, hence my wanting to make this more manageable. My first thought was to use a lookup table with the regex and label, but I'm open to other suggestions. I did find https://community.splunk.com/t5/Splunk-Search/How-do-I-match-a-regex-query-in-a-CSV and have been able to use regexes from a lookup table with a search that was suggested in the solution:

| where [| inputlookup regexlookup.csv | eval matcher="match(subject,\"".regex."\")" | stats values(matcher) as search | eval search=mvjoin(search, " OR ")]

But I'm wondering how to also apply the label to the results with a lookup like this:

regex,label
regex1,label1
regex2,label2

Thanks in advance.
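A sketch of one way to carry the label along, assuming the events hold the text being tested in a field called subject and the lookup has exactly the regex,label columns shown: append the lookup rows to the search, copy the regex/label pairs onto every event, expand, and keep the combinations that match. Treat it as a starting point rather than a drop-in, since mvexpand multiplies event volume by the number of lookup rows, and match() taking a field as its pattern argument should be verified on your version:

index=foo
| append [| inputlookup regexlookup.csv | eval pair=regex."@@".label]
| eventstats values(pair) as pairs
| where isnull(pair)
| mvexpand pairs
| eval rx=mvindex(split(pairs, "@@"), 0), label=mvindex(split(pairs, "@@"), 1)
| where match(subject, rx)
| fields - pair, pairs, rx

Unlike case(), which stops at the first match, this yields one row per matching regex, so events matched by several patterns may need a dedup or a stats first() to collapse back to a single label.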
Indexers are getting blocked periodically throughout the day, causing our heavy forwarders to stop forwarding data.

-- Errors on the HF --
07-25-2022 09:22:55.982 -0400 WARN AutoLoadBalancedConnectionStrategy [5212 TcpOutEloop] - Cooked connection to ip=10.3.13.4:9997 timed out
07-25-2022 09:27:14.858 -0400 WARN TcpOutputFd [5212 TcpOutEloop] - Connect to 10.3.13.4:9997 failed. Connection refused
07-25-2022 10:58:06.973 -0400 WARN TcpOutputFd [5034 TcpOutEloop] - Connect to 10.3.13.4:9997 failed. Connection refused

Whenever this appeared on an HF, the indexer at that IP had closed port 9997 and its queues were full.

-- Log messages on the indexer --
07-26-2022 08:51:28.857 -0400 ERROR SplunkOptimize [37311 MainThread] - (child_304072__SplunkOptimize) optimize finished: failed, see rc for more details, dir=/opt/splunk/var/lib/splunk/_metrics/db/hot_v1_70, rc=-4 (unsigned 252), errno=1
07-26-2022 08:56:51.759 -0400 INFO IndexWriter [37662 indexerPipe_1] - The index processor has paused data flow. Too many tsidx files in idx=_metrics bucket="/opt/splunk/var/lib/splunk/_metrics/db/hot_v1_70", waiting for the splunk-optimize indexing helper to catch up merging them. Ensure reasonable disk space is available, and that I/O write throughput is not compromised.
I'm trying to override the host metadata with a regex on source, but it's not working as expected: the events are arriving at the indexers with the forwarding server's hostname. I've referenced a few similar issues from the community site in my attempt, but I'm not having much luck. Can anyone see what's wrong, please? In the example below, the host metadata should be overridden with host1.

Input:

[monitor:///opt/pyLogger/logs/host1]
SHOULD_LINEMERGE = False
sourcetype = network

Props:

[network]
TRANSFORMS-host = overridehost

Transforms:

[overridehost]
SOURCE_KEY = MetaData:Source
DEST_KEY = MetaData:Host
REGEX = (\w+)$
FORMAT = host::$1

The data is arriving at the indexers with the host set to the server the forwarder is running on.
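As a hedged aside: if the host really is always the last directory segment of the monitored path, inputs.conf can set it at input time with no props/transforms at all — host_segment counts path segments from the left, so in /opt/pyLogger/logs/host1 the host is segment 4. This sidesteps the transform entirely, which otherwise must live on the first Splunk instance that parses the data (the indexers or an intermediate heavy forwarder), not on a universal forwarder:

[monitor:///opt/pyLogger/logs/host1]
SHOULD_LINEMERGE = False
sourcetype = network
host_segment = 4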
From time to time, some searches or alerts are not running. The knowledge object owners found that the affected entries in savedsearches.conf are missing some attributes, including "search =". Each time we find the issue on an object, we correct it by recreating it with the missing attributes, but it keeps happening to many alerts/searches. We are using version 8.2.1 with an SHC of 12 members.
I am trying to build a trending dashboard of the count of tickets we get. With the query below, I am trying to display the increase/decrease in the count of tickets of each type over the past two weeks.

index=tickets Status IN ("Open", "Pending") earliest=-14D@w0 latest=@w0
| dedup ticketid
| eval ticket_type=case(like(Tags,"%tag1%"), "Type1", like(Tags,"%tag2%") AND !like(Tags,"%tag1%"), "Type2", like(Tags,"%tag3%") AND !like(Tags,"%tag1%") AND !like(Tags,"%tag2%"), "Type3")
| timechart usenull=f span=1w count by ticket_type

The problem is that whenever the previous week had a higher ticket count, the data shows up with a "-" (minus) sign.

Questions:
1. I don't understand why a "-" would appear for a count.
2. Is there a way to suppress the "-" sign?
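On question 1: in a Splunk statistics table, "-" is how a null cell renders, not a negative number — with usenull=f, a week in which a ticket type has no events yields null rather than 0 for that column. A minimal sketch of the usual suppression, zero-filling after the existing timechart:

... | timechart usenull=f span=1w count by ticket_type
| fillnull value=0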
Hello, I'm just having a bit of difficulty differentiating between Splunk Enterprise, ITSI, SOAR, UBA, and Enterprise Security. It seems like they all do similar things. Do they all work together, or would it be redundant to have all of them at the same time?
Hi, I am new to Splunk, so I was wondering if somebody could help me with this problem. I have deployed Splunk Standalone with the splunk-operator in my Kubernetes cluster. Now I would like to connect to a MS SQL Express DB with DB Connect. The installation of DB Connect works fine, but when I want to configure it, I need to give the Task Server a valid JRE path. The standalone version has no Java installed, so there is no path I can refer to. How can I install Java on the Splunk Standalone? I found this in the docs, but I don't quite understand how to do it this way: https://github.com/splunk/splunk-ansible/blob/develop/docs/advanced/default.yml.spec.md Best regards
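A hedged sketch of how that splunk-ansible setting might be passed through the operator: the Standalone custom resource accepts inline splunk-ansible defaults, and java_version is the key from the linked spec. Whether your operator/image version honors java_version, and the right apiVersion for your operator release, are assumptions to verify:

apiVersion: enterprise.splunk.com/v4
kind: Standalone
metadata:
  name: splunk-standalone
spec:
  defaults: |-
    java_version: "openjdk:11"

If this works, the resulting JRE path inside the container is what you would then point DB Connect's Task Server at.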
Is there a way to use multivalue fields in SOAR? I have not been able to find a good article on how to do this. We have a few sets of logs that use multivalue fields. By default, the SOAR export app creates individual artifacts with only one value per field when the event has multiple values in a field. I know you can turn on the option in the SOAR export app to send all values as a multivalue field to SOAR, but when the actions inside a playbook see the data in a multivalue field, they usually fail.

Here is an example of how an artifact looks when I send an event as a multivalue field to SOAR:

md5_hash: ["55df197458234c1b48fed262e1ed2ed9","55df1974e1b8698765fed262e1ed2ed9"]
sha1_hash: ["b50e38123456ee77ab562816ab0b81b2ab7e3002","b50e3817d416ee77ab562816ab0b81b2ab7e3002"]
sha256_hash: ["b8807c0df1ad23c85e42104efbb96bd61d5fba97b7e09a9e73f5e1a48db1e27e","b8807c0df1ad23c81234504efbb96bd61d5fba97b7e09a9e73f5e1a48db1e27e"]
domains: ["some1.domain1.com","some.domain.com"]
urls: ["https://domain.com/uri/uri/picture.png","https://some1.domain1.com/uri/uri/uri/something.gif","https://some2.domain2.com/uri/uri/uri/something.gif","https://some3.domain3.com/uri/uri/uri/something.gif.gif"]

Here is an example error when I try to get the IPs of the domains field:

Error Code: Error code unavailable. Error Message: None of DNS query names exist: ['some1.domain1.com',\032'some.domain.com']., ['some1.domain1.com',\032'some.domain.com'].localdomain. type = A domain = ['some1.domain1.com','some.domain.com']

I believe the playbook sees the data in the field as a single string and not a list, so it is literally querying the A records for "['some1.domain1.com',\032'some.domain.com']" and not "some1.domain1.com" and "some.domain.com". How can I split these multivalue fields up so that the playbook runs them as a for loop and outputs the results as one block of data?
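A hedged sketch of a custom-code approach, since SOAR playbooks are Python: a helper that turns the stringified list an artifact field may carry into a real Python list, whose items can then be passed to the action one parameter set per value so the results come back as one block. The sample value mirrors the error above; the field names and the assumption that the value arrives as a string like "['a.com', 'b.com']" are mine, not confirmed behavior:

import ast

def split_multivalue(value):
    # Normalize a SOAR artifact field that may be a real list, a
    # stringified list such as "['a.com', 'b.com']", or a plain string.
    if isinstance(value, list):
        return [str(v).strip() for v in value]
    try:
        parsed = ast.literal_eval(str(value))
        if isinstance(parsed, (list, tuple)):
            return [str(v).strip() for v in parsed]
    except (ValueError, SyntaxError):
        pass
    # Fall back to treating the value as a single item
    return [str(value).strip()]

# One "lookup domain" parameter dictionary per domain, instead of one
# query containing the whole stringified list:
domains = split_multivalue("['some1.domain1.com', 'some.domain.com']")
parameters = [{"domain": d, "type": "A"} for d in domains]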
Prior to upgrading to Splunk Enterprise 9.0 (we were on 8.2.6), the Indexes tab showed a full list of our indexes when creating or editing a role. After the upgrade, existing roles still show their checked indexes but are missing the other available indexes, and when creating a new role almost all indexes are missing from the list. We are running an SHC and an indexer cluster. I have seen this issue in the past, and we had to deploy a list of our indexes to our SHC. Another possible fix is to allow (All non-internal indexes) and add restrictions. Has anyone else had this issue or know of a fix?