All Topics



Hi all, we recently created an MIDC in our environment to track the user journey through our applications. The MIDC is created for a specific tier (40 nodes), which has around 150 BTs registered. When we check the values in Transaction Analytics, we see that the data collectors are dropping a lot of MIDC values for the same BT. My questions:

1. Does AppDynamics Transaction Analytics collect data for each instance of a BT, or does it select instances randomly based on the performance of the BT instance?
2. If there is a lot of missing data for specific data collector fields, where should we start troubleshooting to see the flow of the data from the Analytics Agent and the app agent?

PS: The data is dropped not only for the MIDC but also for HTTP data collection, e.g. sessionID.
Hi, I have an index "otx" with a field "indicator", and I want to trigger an alert if any IP address from "indicator" matches my ASA firewall logs, where "dest_ip" is the field in the ASA logs. I am trying the query below:

index=asa | join dest_ip [search sourcetype="otx:indicator" type=IPV4 | fields indicator | rename indicator as dest_ip]

Thanks, Shashi
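A subsearch used as a direct filter is usually simpler than join here and avoids join's row limits. A minimal sketch, assuming the indicator events live in an "otx" index with sourcetype "otx:indicator" (adjust the index and sourcetype to your environment):

```spl
index=asa
    [ search index=otx sourcetype="otx:indicator" type=IPV4
      | fields indicator
      | rename indicator AS dest_ip
      | format ]
```

The subsearch expands to a filter like (dest_ip="1.2.3.4" OR dest_ip="5.6.7.8" ...), so only ASA events whose dest_ip appears in the indicator list are returned. Note that subsearch results are capped (10,000 by default), so a lookup-based match may scale better for large indicator lists.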
Hello, I have a .json that contains some multivalue fields. I would like to prevent these multivalue fields from being indexed, because they contain a lot of data that I don't want in the index. Is there any way to do it?

I have tried replacing all the multivalue text with a single character, using the following command in a search:

| rex field="changelog.histories{}.history" mode=sed "s/(^.+)/x/g"

and with it I am able to change:

asderdas
asd34sdas
asdaserwerw

to:

x
x
x

I have also tried, in the "Add data" > "Set sourcetype" window under Advanced:

SEDCMD-xyz = s/"changelog.histories{}.history"=^.+/x/g

but I can't get that to work. I would like to either avoid indexing "changelog.histories{}.history" entirely, or replace all of its multivalue values with a single character (e.g. x). Is that possible?

Thanks a lot and regards, Daniel
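SEDCMD runs against the raw event text at index time, not against extracted field names, so the sed expression has to match the JSON as it appears in _raw rather than the flattened name "changelog.histories{}.history". A minimal props.conf sketch, assuming the values to mask sit under a literal "history" key (the sourcetype stanza name and the key are illustrative):

```
# props.conf on the indexer or heavy forwarder that parses this sourcetype
[my_json_sourcetype]
# Replace the value of every "history" key in the raw JSON with "x"
SEDCMD-mask_history = s/"history"\s*:\s*"[^"]*"/"history":"x"/g
```

If you need to drop the content entirely rather than mask it, an index-time transform that rewrites _raw is the usual alternative.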
Hi, I send eStreamer data from FMC 6.7 to Splunk 8.1. Everything works fine except that I can't send the original client IP address and the HTTP response code. The extended option is enabled in estreamer.conf:

"archiveTimestamps": true,
"eventExtraData": true,
"extended": true,
"impactEventAlerts": true,
"intrusion": true,
"metadata": true,
"packetData": true

Please help me. Thanks
I have created two lists with stats list() and values(). These are called Lookup_Vals (from the lookup table's Lookup_procedures field) and Originals (from the Splunk search's Procedure_Name field). I want a new list made up of the values that are in Lookup_Vals but NOT in Originals.

I've tried the match command, but that only tells me whether the lists are the same or not. I've also tried:

List(eval(if(IN(Lookup_procedures,Originals),"Match","No Match"))) as Missing

but that doesn't seem to work either; the if statement resolves to false every time, even though I know the lists are mostly the same.

Full search:

| search index here
| fields Procedure_Name, Process_Name, Activity_Code, UpdatedDate
| eval Procedure_Name=coalesce(Process_Name, Procedure_Name)
| stats count by Procedure_Name
| append
    [| inputlookup chubDashboardProcedures.csv
     | rename 1.0_Procedures as Lookup_procedures
     | eval count=0
     | fields Lookup_procedures count]
| stats sum(count) as total, List(Lookup_procedures) as Lookup_Vals, Values(Procedure_Name) as Originals, Values(eval(if(IN(Lookup_procedures,Originals),"Match","No Match"))) as Missing

I've also tried using mvjoin(Originals, ",") on Originals, but that doesn't seem to help either.
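Once both multivalue fields are on the same row, mvmap (Splunk 8.0+) can build the set difference element by element, which the eval-inside-stats approach can't do. A sketch, assuming Lookup_Vals and Originals exist as produced by the final stats above, and that the values contain no regex metacharacters:

```spl
... | eval Missing=mvmap(Lookup_Vals,
        if(isnull(mvfind(Originals, "^".Lookup_Vals."$")), Lookup_Vals, null()))
```

For each element of Lookup_Vals, mvfind looks for an exact match in Originals; elements with no match are kept in Missing, while matched elements become null and are dropped.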
Hi Team, I am trying to create a search that does the following:

If myField = xyz, show the id, salary, department fields in the table.
If myField = abc, show the location, address, phone fields in the table.
Similarly, if myField = ddd, show the age, ht, gender fields in the table.

I was trying to use case/if statements, but I'm not sure how to get multiple fields into the table based on a condition. With a drilldown it would be easy, as I could set the condition and get the output, but I want to do this in the search itself.
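The table command can't vary its columns per row, but one workaround is to copy the wanted values into a fixed set of display columns with case(). A sketch using the field names from the question (col1/col2/col3 are illustrative names):

```spl
... | eval col1=case(myField="xyz", id,         myField="abc", location, myField="ddd", age)
    | eval col2=case(myField="xyz", salary,     myField="abc", address,  myField="ddd", ht)
    | eval col3=case(myField="xyz", department, myField="abc", phone,    myField="ddd", gender)
    | table myField col1 col2 col3
```

If you instead want genuinely different tables per value of myField, separate panels or searches per value are the usual approach.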
Hi, we are exposing our search heads' management port for API access to Splunk and have enabled mTLS. When our users hit the endpoint with certs in the chain format below, Splunk throws the error (SSL_ERROR_UNKNOWN_CA_ALERT) - Peer does not recognize and trust the CA that issued your certificate.

Our users' certs are in the following format:

<entity>
<intermediate 1>
<intermediate 2>

The Root CA cert that signed the <intermediate 2> cert is in our cacerts.pem file, configured as below in server.conf:

sslRootCAPath=/path/to/cacerts.pem

The expectation is that Splunk should build the chain, and since the Root CA that signed the last intermediate cert is present in its trust store, it should accept the client connection. This is not happening. However, if we put both of the above intermediate certs in the cacerts.pem file along with the Root CA cert, the connection succeeds.

Why is this happening? Shouldn't Splunk be able to build the chain from the user-provided cert and establish a connection without having the intermediates in its trust store? Is there any configuration parameter to achieve this behaviour?
I have set up a universal forwarder to collect all syslog data into Splunk. All the settings are in place:

1. Connectivity between the servers (syslog UF to Splunk) is OK.
2. The required ports are open.
3. All the configuration on the syslog server and the deployment server is OK.
4. After making changes to inputs under the deployment server app, I restart the app via the GUI.
5. The syslog server is receiving continuous logs.

Even though all the settings are in place, I am not receiving the logs continuously. Sometimes the logs reach Splunk, but sometimes they are not received.
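When forwarding is intermittent, a useful first step is to check the forwarder's own internal logs for blocked queues or connection errors. A sketch against Splunk's _internal index (replace the host placeholder with your UF's hostname):

```spl
index=_internal host=<your_uf_host> source=*splunkd.log* (log_level=WARN OR log_level=ERROR)
| stats count by component
| sort - count
```

Components such as TcpOutputProc showing up here usually point at output-side blocking (e.g. the indexer's receiving queue being full), which would explain logs arriving in bursts rather than continuously.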
Hi, I am running into the following issue when I try to monitor my k8s cluster using AppDynamics. I verified that my controller key is correct. Is there any other reason why I would run into this issue? Thanks, Raghu

[WARNING]: 2021-01-16 05:27:39 - agentregistrationmodule.go:352 - dev is not a valid namespace in your kubernetes cluster
[INFO]: 2021-01-16 05:27:39 - agentregistrationmodule.go:356 - Established connection to Kubernetes API
[INFO]: 2021-01-16 05:27:39 - agentregistrationmodule.go:68 - Cluster name: TEST
[INFO]: 2021-01-16 05:27:39 - agentregistrationmodule.go:119 - Initial Agent registration
[ERROR]: 2021-01-16 05:27:39 - agentregistrationmodule.go:131 - Failed to send agent registration request: Status: 401 Unauthorized, Body:
[ERROR]: 2021-01-16 05:27:39 - agentregistrationmodule.go:132 - clusterId: -1
[ERROR]: 2021-01-16 05:27:39 - agentregistrationmodule.go:134 - Registration properties: {}
Hi, I would like to ask how to accelerate getting a timechart from a data model with this query:

| datamodel Intrusion_Detection IDS_Attacks search
| search ("IDS_Attacks.src"="10.0.0.0/8" OR "IDS_Attacks.src"="172.16.0.0/12" OR "IDS_Attacks.src"="192.168.0.0/16") AND ("IDS_Attacks.severity"="high" OR "IDS_Attacks.severity"="critical")
| table _time, IDS_Attacks.category
| timechart useother=`useother` count by IDS_Attacks.category

I have tried using tstats, but the data is not suitable: with the tstats command some of the counts come out as just 1 event, so the timechart is not clear. This is the tstats command I used before:

| tstats allow_old_summaries=t count from datamodel=Intrusion_Detection.IDS_Attacks where ("IDS_Attacks.src"="10.0.0.0/8" OR "IDS_Attacks.src"="172.16.0.0/12" OR "IDS_Attacks.src"="192.168.0.0/16") AND ("IDS_Attacks.severity"="high" OR "IDS_Attacks.severity"="critical") by _time span=s IDS_Attacks.category
| timechart useother=`useother` count by IDS_Attacks.category

Thanks for your help. Best regards
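The likely culprit in the tstats version is span=s, which forces one-second buckets (hence counts of 1) before timechart re-buckets them. With prestats=t, tstats hands aggregation data straight to timechart, which then does its own time bucketing. A sketch (the span value is illustrative):

```spl
| tstats prestats=t allow_old_summaries=t count
    from datamodel=Intrusion_Detection.IDS_Attacks
    where ("IDS_Attacks.src"="10.0.0.0/8" OR "IDS_Attacks.src"="172.16.0.0/12" OR "IDS_Attacks.src"="192.168.0.0/16")
      AND ("IDS_Attacks.severity"="high" OR "IDS_Attacks.severity"="critical")
    by _time span=10m IDS_Attacks.category
| timechart useother=`useother` count by IDS_Attacks.category
```

This keeps the speed of searching the accelerated summaries while producing the same shape of chart as the slower | datamodel search.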
Hi all, we have 1 indexer, 1 SH, and 1 DS. We are planning an OS upgrade (RHEL 6 to OL 7). All of these devices are in the cloud. Can someone help me with the steps to take from a Splunk perspective during this upgrade? What checks should be done, and is there a recommended order, e.g. first the indexer, then the DS, then the SH? What should be done on failover? Splunk itself is not being upgraded for now; we are keeping the same Splunk version, 7.3. I have already read this article: https://community.splunk.com/t5/Installation/What-s-the-order-of-operations-for-upgrading-Splunk-Ent... but I am not upgrading Splunk here.
What's the meaning of this warning message? I didn't enable index reduction at all. "Search on most recent data has completed. Expect slower search speeds as we search the reduced buckets."
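That warning relates to tsidx reduction on the indexes being searched; even if you didn't enable it, another admin or an app may have. One way to check which indexes have it enabled, a sketch using the REST endpoint (requires permission to run | rest):

```spl
| rest /services/data/indexes
| search enableTsidxReduction=1
| table title enableTsidxReduction timePeriodInSecBeforeTsidxReduction
```

These field names mirror the indexes.conf settings of the same names, so a hit here identifies the index whose reduced buckets triggered the message.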
I would like to export all configured alerts in a particular app, with all their configured settings, including the actions taken when the alert is triggered. I have tried some rest queries and searches listed on the Community, without success.
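Alerts are saved searches under the hood, so the saved/searches REST endpoint exposes them together with their action settings. A sketch, assuming the app is named "myapp" (adjust the app name and the columns to the settings you need):

```spl
| rest /servicesNS/-/myapp/saved/searches splunk_server=local
| search eai:acl.app="myapp"
| table title search cron_schedule is_scheduled actions action.email.to alert_type alert.severity
```

Keeping only rows with a non-empty actions field is one way to restrict the output to saved searches that actually fire alert actions; the result can then be exported as CSV from the search UI.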
Greetings Splunkers, I've been banging my head against the keyboard trying to resolve this comparison issue. I know there's a way to do it; I just can't seem to figure it out.

The issue I'm trying to resolve is determining whether a user has a conflict of interest with regard to the roles they have been assigned and what management deems inappropriate; segregation of duties, basically.

I get my list of conflicting roles from a lookup:

lookup SoD.csv Department as Department OUTPUTNEW Conflicting_Roles, Justification

So if USER_A is within the Finance department and assigned roles A, B, and C, OR A, B, and Z, for example, it would be a conflict because they still have A & B. I can get a proper result when I manually input the roles within a case statement:

Conflict=case(Assigned_Role=Role_B AND Assigned_Role=Role_C, "Conflict", 1=1, "No Conflict")

However, given that the conflicting roles change based on the department, it will be easier in the long run to maintain a lookup of conflicting roles than to continuously update the query. I've also tried the following, but none of them work as I need:

Conflict=case(in(Roles,Conflicting_Roles), "True", 1=1, "False")
Conflict=if(isnotnull(mvfind(Roles,Conflicting_Roles)), "Matched", "Not Matched")
Conflict=if(match(Roles,Conflicting_Roles), "Conflict", "No Conflict")

Any thoughts or suggestions are greatly appreciated. Thank you
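Since a conflict means the user holds at least two of the department's conflicting roles, one approach is to count the overlap between the two multivalue fields with mvmap (Splunk 8.0+). A sketch, assuming Roles and Conflicting_Roles are both multivalue on each row and contain no regex metacharacters:

```spl
... | eval overlap=mvcount(mvmap(Conflicting_Roles,
        if(isnull(mvfind(Roles, "^".Conflicting_Roles."$")), null(), Conflicting_Roles)))
    | eval Conflict=if(coalesce(overlap, 0) >= 2, "Conflict", "No Conflict")
```

mvfind does an exact-match lookup of each conflicting role within the user's Roles; mvcount of the surviving elements gives the number of conflicting roles the user actually holds, which the threshold then turns into a verdict.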
We have multiple apps that generate logs, and their formats are slightly different. Splunk currently just shows the field as a plain string, e.g.:

{ id:1, log: "{k1:v1,k2:v2}" }

k1 and k2 are not searchable. The log field can contain messages in different formats, but we want all of them to be searchable. Thanks
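Because the log field is itself a JSON-like string nested inside the outer event, spath can parse it at search time. A minimal sketch, assuming the outer field is already extracted as "log" and the inner payload is valid JSON (the index name is a placeholder):

```spl
index=my_app_index
| spath input=log
```

After this, k1 and k2 become regular fields on the event. For inner payloads that are not valid JSON, per-format rex extractions (or search-time EXTRACT settings in props.conf per sourcetype) are the usual alternative.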
I have email logs within index=Email and suspicious domain connections within index=Security. The field name within Security is domain (values look like "website.com"). The field of interest in Email is URL (values look like https://www.corp.website.com/page1/page2/etc:443). I need to search index=Email with a sub-search against index=Security, and only return results if domain is contained within URL. I tried things like:

index=Email [search index=Security | where like(URL, "%"."domain"."%")]

Anyone who can help me will make my week! Thanks for your time.
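One way is to have the subsearch turn each domain into a wildcarded term for the URL field; the outer search then does the substring matching natively. A sketch:

```spl
index=Email
    [ search index=Security
      | dedup domain
      | eval URL="*" . domain . "*"
      | fields URL
      | format ]
```

The subsearch expands to (URL="*website.com*" OR ...), so any Email event whose URL contains one of the suspicious domains is returned. For large domain lists, a lookup defined with WILDCARD match_type in transforms.conf is the more scalable equivalent.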
The requirement is to find event_A / event_B pairs such that:

1. There is some event_A before the event_B; the event_A's TEXT field and the event_B's TEXT field have an identical first character; and the second characters satisfy this condition: the numerical value of the 2nd character of the event_B's TEXT equals that of the event_A's, or is 1 more, or 1 less, than the event_A's.
2. Either:
   a. the event_B comes after some event_A satisfying condition 1 with CATEGORY value "ALARM", and not after such an event_A with CATEGORY value "CLEARED"; or
   b. the event_B comes after some event_A satisfying condition 1 with CATEGORY value "CLEARED", but the event_B's _time is within 60 minutes of the _time of that event_A (CATEGORY=CLEARED).

Here is some sample data:

_time                         CATEGORY  TYPE     TEXT
2020-12-29T05:20:32.710-0800  ADVISORY  event_B  K35JB
2020-12-29T05:37:54.462-0800  ADVISORY  event_B  A05KM
2020-12-29T05:57:50.164-0800  ADVISORY  event_B  K25CD
2020-12-29T05:59:06.004-0800  ALARM     event_A  R20-A
2020-12-29T05:59:24.635-0800  ALARM     event_A  K35-E
2020-12-29T05:59:37.200-0800  ALARM     event_A  C15
2020-12-29T06:00:24.470-0800  CLEARED   event_A  R20-A
2020-12-29T06:00:40.415-0800  CLEARED   event_A  K35-E
2020-12-29T06:08:09.945-0800  ADVISORY  event_B  R65AG
2020-12-29T06:14:24.740-0800  ADVISORY  event_B  K35JB
2020-12-29T06:14:43.988-0800  ADVISORY  event_B  K45JB
2020-12-29T06:56:44.642-0800  ADVISORY  event_B  A77MD
2020-12-29T06:59:42.745-0800  ADVISORY  event_B  C87AB
2020-12-29T07:30:39.080-0800  ADVISORY  event_B  M97AF
2020-12-29T08:39:26.008-0800  ADVISORY  event_B  K25BA
2020-12-29T09:46:48.175-0800  ADVISORY  event_B  C25EG

Here is the illustration with the above sample data (comments after #):

# all the event_B's without an event_A before them are eliminated
2020-12-29T05:59:06.004-0800  ALARM     event_A  R20-A  # expecting event_B with TEXT prefix Ri, where i = 1, 2, 3
2020-12-29T05:59:24.635-0800  ALARM     event_A  K35-E  # expecting event_B with TEXT prefix Ki, where i = 2, 3, 4
2020-12-29T05:59:37.200-0800  ALARM     event_A  C15    # expecting event_B with TEXT prefix Ci, where i = 0, 1, 2
2020-12-29T06:00:24.470-0800  CLEARED   event_A  R20-A  # now only expecting event_B with TEXT prefix Ri, i = 1, 2, 3, with _time < 2020-12-29T06:00:24.470-0800 + 60 minutes
2020-12-29T06:00:40.415-0800  CLEARED   event_A  K35-E  # now only expecting event_B with TEXT prefix Ki, i = 2, 3, 4, with _time < 2020-12-29T06:00:40.415-0800 + 60 minutes
2020-12-29T06:08:09.945-0800  ADVISORY  event_B  R65AG  # eliminated: R6 does not match Ri, i = 1, 2, 3
2020-12-29T06:14:24.740-0800  ADVISORY  event_B  K35JB  # kept: K3 matches an expected prefix and is within the time window
2020-12-29T06:14:43.988-0800  ADVISORY  event_B  K45JB  # kept: K4 matches an expected prefix and is within the time window
2020-12-29T06:56:44.642-0800  ADVISORY  event_B  A77MD  # eliminated: A7 does not match any expected prefix
2020-12-29T06:59:42.745-0800  ADVISORY  event_B  C87AB  # eliminated: C8 does not match Ci, i = 0, 1, 2
2020-12-29T07:30:39.080-0800  ADVISORY  event_B  M97AF  # eliminated: M9 does not match any expected prefix
2020-12-29T08:39:26.008-0800  ADVISORY  event_B  K25BA  # eliminated: its _time is beyond the expected window
2020-12-29T09:46:48.175-0800  ADVISORY  event_B  C25EG  # kept: C2 matches an expected prefix, and there is no time-window limit for the prefix C2

I cannot wrap my head around a solution with a Splunk query. I could only find a solution when there is a single event_A expecting a corresponding event_B, using streamstats to keep track of that one event_A's TEXT prefix and _time while scanning for a satisfying event_B. But once there are multiple event_A's with different TEXT prefixes and _time's, I cannot find a way to remember and apply all of their expectations during the scan.

With a conventional programming language, say Python, I would keep track of the union of expected prefixes and time windows, and scan the events against that history state. Could you kindly help me? Thanks in advance!
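One possible shape for this in SPL (a sketch, not a tested solution): turn each event_A into the set of prefixes it "expects" with mvappend + mvexpand, then use streamstats by that prefix key to carry the most recent ALARM/CLEARED state forward onto event_B rows. The index name is a placeholder, and digit wraparound at 0/9 is left unhandled:

```spl
index=my_events (TYPE=event_A OR TYPE=event_B)
| sort 0 _time
| eval c1=substr(TEXT,1,1), c2=tonumber(substr(TEXT,2,1))
| eval keys=if(TYPE="event_A", mvappend(c1.(c2-1), c1.c2, c1.(c2+1)), c1.c2)
| mvexpand keys
| eval a_cat=if(TYPE="event_A", CATEGORY, null()), a_time=if(TYPE="event_A", _time, null())
| streamstats current=f last(a_cat) as last_cat, last(a_time) as last_time by keys
| where TYPE="event_B" AND (last_cat="ALARM" OR (last_cat="CLEARED" AND _time - last_time <= 3600))
```

Each ALARM or CLEARED event_A updates the state for its three candidate prefixes (streamstats last() ignores the null a_cat/a_time on event_B rows); an event_B survives only if the latest state for its own two-character prefix is ALARM, or CLEARED within the 60-minute window. Traced against the sample data above, this should keep K35JB (06:14), K45JB, and C25EG, matching the expected output.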
I'm using the SaaS On-Prem controller and have everything showing from both the Java agent and the machine agent, except that the heap and garbage collection page under the memory tab is just blank; nothing loads. I've tried various settings and 2 different JVMs, and I still can't get anything to load. Does anyone have any ideas what the issue could be (a configuration issue, or an issue in the UI)?
Could someone tell me how I can make a search query to report all login attempts at the OS level and at the Splunk level? Thank you, Alex
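Splunk's own login attempts are recorded in the _audit index; OS-level logins on Linux typically arrive via /var/log/secure, if that file is monitored. A sketch, assuming the OS authentication logs are ingested with sourcetype "linux_secure" (adjust to whatever sourcetype your environment actually uses):

```spl
(index=_audit action="login attempt")
OR (sourcetype=linux_secure ("Accepted password" OR "Failed password"))
| eval level=if(index="_audit", "Splunk", "OS")
| stats count by level, user
```

For _audit events, the info field ("succeeded"/"failed") distinguishes the outcome; for Windows hosts, Security event codes 4624/4625 would replace the linux_secure clause.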