All Topics

Hi, we have a scenario where three different events should be combined based on an event ID.

Example:

Event 1 fields: Hostname, Unique_ID, Has_Vulnerabilities
Event 2 fields: Scan_Date, Hostname_Unique_ID, Vulnerability_Id
Event 3 fields: Vulnerability_id, Description, Start_Date, …

What we are trying to do: when I click on the Event 1 Unique_ID, get all vulnerabilities for the selected host from Event 2, enriched with selected data from Event 3. All three events are in the same index but have different sourcetypes.

What is the best approach here? A subsearch seems slow if I start from Event 2 and then filter against Event 1; I would like to go the other way around.
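One way to avoid the subsearch entirely, sketched below under several assumptions: the drilldown on Event 1 already supplies the clicked Unique_ID as a dashboard token (called $selected_unique_id$ here, a hypothetical name), the Event 2 and Event 3 sourcetypes are st2 and st3 (also hypothetical), and Event 2's Vulnerability_Id matches Event 3's Vulnerability_id. Search both enrichment sourcetypes at once, normalise the join key with coalesce, and let stats do the correlation instead of a join:

index=myindex sourcetype IN (st2, st3)
| eval vuln_key=coalesce(Vulnerability_Id, Vulnerability_id)
| stats values(Hostname_Unique_ID) as Hostname_Unique_ID values(Scan_Date) as Scan_Date values(Description) as Description values(Start_Date) as Start_Date by vuln_key
| search Hostname_Unique_ID="$selected_unique_id$"

Because stats groups all events sharing the same vuln_key, each row carries both the Event 2 scan details and the Event 3 description, and the final search keeps only the rows for the clicked host.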
I have a sourcetype whose logs I have been trying to break apart, but I keep getting: Failed to parse timestamp.

Here is the props.conf stanza:

[ logs ]
CHARSET=UTF-8
EVENT_BREAKER_ENABLE=true
LINE_BREAKER=([\r\n]+)\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\,\d{3}
MAX_EVENTS=135000
MAX_TIMESTAMP_LOOKAHEAD=23
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=true
TIME_FORMAT=%Y-%m-%d %H:%M:%S,%3N
TRUNCATE=50000
TZ=America/New_York
disabled=false
pulldown_type=true

The events look like they are broken correctly, but I still keep getting the error about the timestamp. Here is an example of the logs:

2022-04-25 11:28:17,743 ERROR [148] Method:C1222.MessageProcessor.ProcessResponseMessage -- String[] {Unexpected Exception: Internal Error - Unable to find Endpoint by ApTitle. - ApTitle: 2.16.124.113620.1.22.0.1.1.64.5541482OldDeviceAddress: x.xx.xxx.xxxxxx.x.xx.x.x.x.xx.xxxxxxx, Internal Error - Unable to find Endpoint by ApTitle.} Itron.Ami.Common.Logging.AmiException: Internal Error - Unable to find Endpoint by ApTitle.
2022-04-25 11:28:17,759 ERROR [148] Method:C1222.MessageProcessor.ProcessResponseMessage -- Unexpected System Exception: AmiException - Internal Error - Unable to find Endpoint by ApTitle. received - contact Application manager.
Hi all,

- The log acquisition interval is 60 seconds.
- The request_timeout is longer than the default 60 seconds.

In this situation, is there a possibility that log acquisition operations overlap, as described below?

Example: when the retrieval interval is 60 seconds and the timeout is 90 seconds

06:00:00 Acquisition Sample-A starts
06:01:00 Acquisition Sample-B starts
06:01:20 Acquisition Sample-A ends
I am running the following query where, at the end, I would like to fetch the value of the "Client" key from the JSON and count all such clients. My query goes as follows:

QUERY | rex ".*\"Client\":\"(?<Client>.*)\"," | stats count byClient

The events returned by the query will definitely contain the JSON with "Client" as one of the keys, but the order of the keys may change. This extraction of Client from the JSON is not working and I am getting Client as null. What is the problem here? My events look as follows.

Event type 1:

request-id : ABC Executing following method: Class.RestClass ::: with values: { "d1": "EU", "sn": "sn", "entityType": "USER", "email": "test@gmail.com", "id": [ "123" ], "Client": "TEST", "time": "2020-01-01T01:01:01Z", "List": [ { "Type": "Items1", "value": "-1", "match": "NO" } ] }

Event type 2:

request-id : 234 Execute something ::: with param-values: { "d1": "JP", "sn": "sn", "type": "USER", "user": "test1@gmail.com", "id": [ "123" ], "source": "S1", "Client": "test_client", "initiate": "init_Name", "mode": "Test", "t1": "", "t2": "", "auto": true, "list": [ { "type": "type_count", "value": "-1", "creteria": "skip" } ] }

How can I correct my query to get the correct results?
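For reference, a minimal sketch of a more tolerant version, assuming the Client value itself never contains a double quote: exclude quotes in the capture group so it stops at the closing quote (the greedy .* in the original runs past it), drop the trailing comma so the key can also appear last in the object, and add the missing space in "by Client":

QUERY
| rex "\"Client\":\"(?<Client>[^\"]*)\""
| stats count by Client

If the text after the ":::" prefix is valid JSON, spath on that extracted portion would be another option instead of rex.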
Hello everyone,

We are deploying EDR agents to all servers in our environment, but I wonder whether the EDR agent causes any issues on Splunk components such as indexers, search heads or heavy forwarders. Has anyone installed an EDR agent on Splunk components running on CentOS 7, and did you run into any problems?

Kind regards
We built a custom app and deployed it in Splunk. It is writing logs to splunk/var/log/appname/appname.log. I would like to find a way to handle the logs natively within Splunk, possibly using log.cfg to roll the log at a certain size and to control retention. I added a stanza to log.cfg in an attempt to manage this log, but Splunk doesn't appear to care about the added configs. Has anyone used this file in such a way? Is it even possible? Below is a snippet of the config I added to log.cfg for the app.

appender.appname=RollingFileAppender
appender.appname.fileName=${SPLUNK_HOME}/var/log/appname/appname.log
appender.appname.maxFileSize=25000000 # default: 25MB (specified in bytes).
appender.appname.maxBackupIndex=5
appender.appname.layout=PatternLayout
appender.appname.layout.ConversionPattern=%d{%m-%d-%Y %H:%M:%S.%l %z} %-5p %c - %m%n
Hello

We recently migrated our CM to a new clean host. After migration almost everything is good, but I have a few errors relating to masking changes and it seems to only be on a single host:

04-26-2022 08:46:36.578 -0400 INFO CMRepJob - running job=CMChangeMasksJob guid=4769183B-D1C2-4906-AFE5-7A799E2A3B5D number-of-changes=30 genid=91958
04-26-2022 08:46:36.726 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=server_oracle~220~4C96BDC6-0710-452F-9514-50C631A94286 transitioning from=SearchablePendingMask to=Searchable oldmask=0xffffffffffffffff newmask=0x0 reason="mask change failed, reverting back"
04-26-2022 08:46:36.726 -0400 WARN CMMaster - mask change failed, reverting back bid=server_oracle~239~FBCEF346-BB10-406D-BD87-087A1DC6F5BF mask=0 searchState=Searchable status=Complete
04-26-2022 08:46:36.726 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=server_oracle~239~FBCEF346-BB10-406D-BD87-087A1DC6F5BF transitioning from=SearchablePendingMask to=Searchable oldmask=0xffffffffffffffff newmask=0x0 reason="mask change failed, reverting back"
04-26-2022 08:46:36.726 -0400 WARN CMMaster - mask change failed, reverting back bid=windows~29197~FBCEF346-BB10-406D-BD87-087A1DC6F5BF mask=0 searchState=Searchable status=Complete
04-26-2022 08:46:36.726 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=windows~29197~FBCEF346-BB10-406D-BD87-087A1DC6F5BF transitioning from=SearchablePendingMask to=Searchable oldmask=0xffffffffffffffff newmask=0x0 reason="mask change failed, reverting back"
04-26-2022 08:46:36.726 -0400 WARN CMMaster - mask change failed, reverting back bid=windows_events~3932~7AD6523B-A4F1-4B0E-8C77-511FD5FA2286 mask=0 searchState=Searchable status=Complete
04-26-2022 08:46:36.726 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=windows_events~3932~7AD6523B-A4F1-4B0E-8C77-511FD5FA2286 transitioning from=SearchablePendingMask to=Searchable oldmask=0xffffffffffffffff newmask=0x0 reason="mask change failed, reverting back"
04-26-2022 08:46:36.726 -0400 INFO CMMaster - event=commitGenerationFailure pendingGen=91958 requesterReason=changeBucketMasks failureReason='event=checkDirtyBuckets first unmet bid=cim_modactions~196~5678E53C-8999-4C69-8032-C55BDB86745E'
04-26-2022 08:46:37.724 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=windows_events~3932~7AD6523B-A4F1-4B0E-8C77-511FD5FA2286 transitioning from=Searchable to=SearchablePendingMask oldmask=0x0 newmask=0xffffffffffffffff reason="fixup searchable mask"
04-26-2022 08:46:37.724 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=windows~29197~FBCEF346-BB10-406D-BD87-087A1DC6F5BF transitioning from=Searchable to=SearchablePendingMask oldmask=0x0 newmask=0xffffffffffffffff reason="fixup searchable mask"
04-26-2022 08:46:37.724 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=server_oracle~239~FBCEF346-BB10-406D-BD87-087A1DC6F5BF transitioning from=Searchable to=SearchablePendingMask oldmask=0x0 newmask=0xffffffffffffffff reason="fixup searchable mask"
04-26-2022 08:46:37.724 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=server_oracle~220~4C96BDC6-0710-452F-9514-50C631A94286 transitioning from=Searchable to=SearchablePendingMask oldmask=0x0 newmask=0xffffffffffffffff reason="fixup searchable mask"
04-26-2022 08:46:37.724 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=server_oracle~215~F448A588-91B9-47D9-99DB-B1CE27CA51AA transitioning from=Searchable to=SearchablePendingMask oldmask=0x0 newmask=0xffffffffffffffff reason="fixup searchable mask"
04-26-2022 08:46:37.724 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=server_oracle~203~7423238A-3907-4BA1-A8A6-8A9A126A6B21 transitioning from=Searchable to=SearchablePendingMask oldmask=0x0 newmask=0xffffffffffffffff reason="fixup searchable mask"
04-26-2022 08:46:37.724 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=server_oracle~49~BB39BC9E-D7DA-4934-8D4A-FC7DD9C982B4 transitioning from=Searchable to=SearchablePendingMask oldmask=0x0 newmask=0xffffffffffffffff reason="fixup searchable mask"
04-26-2022 08:46:37.724 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=server_oracle~2~A98C1984-B48A-4B58-8D83-B6D1FAA01F08 transitioning from=Searchable to=SearchablePendingMask oldmask=0x0 newmask=0xffffffffffffffff reason="fixup searchable mask"
04-26-2022 08:46:37.724 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=server_ad~20~AC4D9A8B-995F-4043-BBE1-1FD61BFA3BEB transitioning from=Searchable to=SearchablePendingMask oldmask=0x0 newmask=0xffffffffffffffff reason="fixup searchable mask"
04-26-2022 08:46:45.718 -0400 INFO CMMaster - event=commitGenerationFailure pendingGen=91958 requesterReason=changeBucketMasks failureReason='event=checkDirtyBuckets first unmet bid=cim_modactions~196~5678E53C-8999-4C69-8032-C55BDB86745E'

These issues are the only thing keeping our cluster from being completely migrated/fixed. Thanks for the help!

Todd Waller
Hi all,

In my dashboard I have an edit-data option. For a few multiselect inputs, the previous value is null; on edit, when I select any new value, I want to remove that null value from the multiselect data. I am using JavaScript to add/edit records from the UI. Could you please help me deal with the null values, i.e. remove the null values when new data is selected?

Thanks!
Hi,

I have logs that are delimited by ||. I would like to extract the nth value from each log and group the results by value and count. I am fairly new to Splunk; this is how far I have gotten:

index=<index> INSERT OR UPDATE | eval fields=split(_raw,"||") | <WHAT DO I NEED HERE> | stats count by <field_value> | sort -count

My data:

INSERT||"test Identifier"||"hostname"||"192.168.2.1"||"This is a test log"||....
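As a sketch of the missing piece, assuming the 4th ||-delimited column (the IP address in the sample) is the one wanted, mvindex can pick a single value out of the multivalue field produced by split; it is zero-based, so index 3 is the 4th column:

index=<index> INSERT OR UPDATE
| eval fields=split(_raw,"||")
| eval field_value=mvindex(fields, 3)
| stats count by field_value
| sort -count

Changing the mvindex offset selects a different column, and trim(field_value,"\"") can strip the surrounding quotes if they get in the way.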
Hi all,

Is there a way to determine how much data agents send to AppDynamics?

Regards
Charan
Hi SMEs,

I need to configure the UF so that it does not collect logs older than X days. Is this feasible, and if so, how? Also, what is the best practice for the number of days to configure here?

Many thanks
Hi,

I just started using the Splunk Add-on for Microsoft Security and found out that in ms_security_utils.py, on lines 137 and 138, request.compat.quote_plus is used. However, that gives an incorrect format when the client_id or client_secret has special characters in it, such as + or =; it will replace them with %3 and %5. To get the plugin working I just removed request.compat.quote_plus.

Kind regards,
Arnoud
Hello all,

Kindly help me find a solution for this. I need to whitelist a list of hosts (the host count is >1220 and may grow further) from all alerts. The field name for the host varies per correlation search. I have been trying the options below for some days:

1. Upload the list as a lookup table and whitelist through the lookup in every correlation rule (which will cause retroactive alerts).
2. Suppression rule - since the host field name is different for each rule, I would need to write a suppression rule for each correlation rule.
3. Single suppression rule - I am not clear how to get the host values from all correlation searches, map them into a single field, and then search the values.

Currently I am trying to write a query to get the host values from `notable` and compare the values:

`notable` | fillnull value=0 Hostname,dest_host,nt_host,Computer_Name | eval whitelist_host=if(Hostname!=NULL, Hostname, if(dest_host!=NULL, dest_host, if(nt_host!=NULL, nt_host,Computer_Name))) | table whitelist_host search_name Hostname dest_host nt_host Computer_Name | dedup whitelist_host

`notable` | eval gifted_host=coalesce(coalesce(Hostname,dest_host),nt_host) | table gifted_host | dedup gifted_host

Please let me know if you have any suggestions or any other options.
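For what it's worth, a rough sketch of the single-suppression idea, assuming a lookup file named host_whitelist.csv (a hypothetical name, uploaded as a lookup table file or defined as a lookup definition) with a single column whitelist_host: coalesce accepts all the candidate field names at once, and the lookup plus a null check keeps only notables whose host is not on the list:

`notable`
| eval whitelist_host=coalesce(Hostname, dest_host, nt_host, Computer_Name)
| lookup host_whitelist.csv whitelist_host OUTPUT whitelist_host as matched
| where isnull(matched)
| table _time search_name whitelist_host

The same eval/lookup pair could also be wrapped in a macro so every correlation search applies the whitelist the same way.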
Is there a way to modify an HTML page using the Splunk interface? I uploaded an HTML file to Splunk; if I want to modify it, do I have to do it locally and then re-upload it, or can I modify it directly from the Splunk interface?

Thanks in advance, and sorry for my bad English.
I created an HTML page with CSS and JavaScript inside. I uploaded it to Splunk correctly (I can see the HTML), but it seems the JavaScript isn't working, even though when I open the HTML page locally it works perfectly. Any ideas on how to solve this?

Thanks in advance, and sorry for my bad English.
What regex would get only the "cd" part out of data like ab.aaaa.asd.cd?
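Assuming the goal is the last dot-separated segment of a string held in some field (called src_field here, a hypothetical name), one possible rex sketch is:

| rex field=src_field "\.(?<last_segment>[^.]+)$"

This captures everything after the final dot, so ab.aaaa.asd.cd yields last_segment=cd.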
Hey guys,

We use heavy forwarders as gateways to Splunk Cloud, so the servers that produce logs do not send them directly to the internet. Now we want to use the Splunk OpenTelemetry Collector to send Kubernetes logs. Is it possible to send the Kubernetes logs to our heavy forwarders first, instead of directly to the cloud?

Thanks in advance
m
Hey guys,

I have an input that is refusing to work. The input that doesn't work is a Fortigate one; another input on the same syslog server works just fine. I checked the app on the syslog server and both inputs look the same there, so they have been pushed fine from the deployment server. Nothing called "fortigate" shows up in Splunk. Recent log files ARE populated and present on the syslog server. If I search for the host from the Fortigate input, events do show up, which to me looks like it should be forwarding logs?
Hello,

I have the following 2 events.

1st event:

{
  dimensionMap: {
    User type: Real users
    dt.entity.application_method.name: Application
  }
  dimensions: [ APPLICATION_METHOD ]
  timestamps: [ 1650966840000 1650966900000 1650966960000 1650967020000 1650967080000 1650967140000 1650967200000 1650967260000 1650967320000 1650967380000 1650967440000 ]
  values: [ 0.47 0.67 0.37 0.45 0.44 0.57 0.48 0.47 0.69 0.70 0.40 ]
}

2nd event:

{
  dimensionMap: {
    dt.entity.application_method.name: Application
  }
  dimensions: [ APPLICATION_METHOD ]
  timestamps: [ 1650966840000 1650966900000 1650966960000 1650967020000 1650967080000 1650967140000 1650967200000 1650967260000 1650967320000 1650967380000 1650967440000 ]
  values: [ 18 27 23 19 17 21 24 30 13 10 5 ]
}

I would like to bind each value of the 1st event to each value of the 2nd event. I tried some join commands using the timestamps as a common value, but it didn't work. In the end, I would like the following table (Result = Value1*Value2):

Timestamp          Value1    Value2    Result
1650966840000      0.47      18        0.47*18
1650966900000      0.67      27        0.67*27
1650966960000      0.37      23        0.37*23
1650967020000      0.45      19        0.45*19
1650967080000      0.44      17        0.44*17
1650967140000      0.57      21        0.57*21
1650967200000      0.48      24        0.48*24
1650967260000      0.47      30        0.47*30
1650967320000      0.69      13        0.69*13
1650967380000      0.70      10        0.70*10
1650967440000      0.40      5         0.40*5

Thank you.
Regards,
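One possible sketch, assuming the JSON arrays are extracted by spath into multivalue fields named timestamps{} and values{}, and that the presence of the "User type" dimension is what distinguishes the first event from the second: zip each event's timestamps with its values, expand the pairs, and pivot the two series side by side with chart before multiplying.

... base search returning the two events ...
| spath
| eval series=if(isnotnull('dimensionMap.User type'), "Value1", "Value2")
| eval pair=mvzip('timestamps{}', 'values{}', "|")
| mvexpand pair
| eval Timestamp=mvindex(split(pair,"|"),0), val=tonumber(mvindex(split(pair,"|"),1))
| chart first(val) over Timestamp by series
| eval Result=Value1*Value2

The chart step turns the two series into Value1 and Value2 columns keyed by Timestamp, so no join is needed.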
Hi Splunkers, today I'm facing a problem related to the temporal sequence between a multisearch and a search, but let me introduce the context and explain better.

In ES, I have to build a correlation search that must verify 2 events in time order:

1. First, check whether a trojan, backdoor or exploit is found on a destination host, from some source.
2. Then, check whether, from the same source on the same destination, a login and/or an account change is performed.

Constraints: use data models (if possible) and avoid transaction.

Now, I know that I can use:

1. Intrusion Detection for point 1.
2. Authentication and Change for point 2.

The search for point 1 is something like this:

| tstats summariesonly=true fillnull_value="N/D" count from datamodel=Intrusion_Detection where IDS_Attacks.signature IN ("*trojan*","*backdoor*","*exploit*") by IDS_Attacks.dest, IDS_Attacks.src, IDS_Attacks.signature, index, host
| `drop_dm_object_name("IDS_Attacks")`

For point 2, since I have 2 different data models, I built it with a multisearch:

| multisearch
[| tstats summariesonly=true prestats=true fillnull_value="N/D" count from datamodel=Authentication where nodename="Authentication.Successful_Authentication" by index, host, Authentication.src, Authentication.dest | `drop_dm_object_name("Authentication")`]
[| tstats summariesonly=true prestats=true fillnull_value="N/D" count from datamodel=Change where nodename="All_Changes.Account_Management" by index, host, All_Changes.src, All_Changes.dest | `drop_dm_object_name("All_Changes")` ]
| stats count by src, dest, index, host
| stats count values(host) as host, values(index) as inde by src, dest

I tested both searches separately and they work well. Now the point is: how do I tell Splunk that search 1 must trigger before search 2, without using transaction?

I thought about the functions min(_time) and max(_time), and about using eval to check whether the first time occurrence of block 2 is greater than the last time occurrence of block 1, but I'm struggling with the correct use of these functions, because the field holding the time occurrence is always empty, so clearly I'm doing something wrong in my combined code. Consider for example the multisearch of block 2, where I tested the use of min:

| multisearch
[| tstats prestats=true fillnull_value="N/D" min(_time) as firstSuccess, count from datamodel=Authentication where nodename="Authentication.Successful_Authentication" AND by index, host, Authentication.src, Authentication.dest | `drop_dm_object_name("Authentication")`]
[| tstats prestats=true summariesonly=true fillnull_value="N/D" min(_time) as firstSuccess, count from datamodel=Change where nodename="All_Changes.Account_Management" by index, host, All_Changes.src, All_Changes.dest | `drop_dm_object_name("All_Changes")` ]
| stats count min(firstSuccess) as firstSuccess by log_region, log_country, src, dest, index, host

The idea is to find the first occurrence in both parts of the multisearch with min(_time) and then, in the following stats, use min(firstSuccess) to find the smaller of the two; the search shows the required fields in the output, except firstSuccess, which is empty.
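A rough sketch of one way to express the ordering without transaction, under a few assumptions: it drops prestats (so the aggregated time fields keep their names), it drops the index/host/signature grouping for brevity, and it stacks the three tstats result sets with append instead of multisearch before a final stats. The field names are the ones produced by the drop_dm_object_name macros above.

| tstats summariesonly=true min(_time) as first_ids from datamodel=Intrusion_Detection where IDS_Attacks.signature IN ("*trojan*","*backdoor*","*exploit*") by IDS_Attacks.src, IDS_Attacks.dest
| `drop_dm_object_name("IDS_Attacks")`
| append [| tstats summariesonly=true max(_time) as last_follow from datamodel=Authentication where nodename="Authentication.Successful_Authentication" by Authentication.src, Authentication.dest | `drop_dm_object_name("Authentication")`]
| append [| tstats summariesonly=true max(_time) as last_follow from datamodel=Change where nodename="All_Changes.Account_Management" by All_Changes.src, All_Changes.dest | `drop_dm_object_name("All_Changes")`]
| stats min(first_ids) as first_ids max(last_follow) as last_follow by src, dest
| where isnotnull(first_ids) AND isnotnull(last_follow) AND last_follow > first_ids

The final where keeps only src/dest pairs where at least one successful authentication or account change is seen after the earliest IDS detection; swapping min/max in the time aggregations tightens or loosens the ordering condition, and append's subsearch limits apply to the two appended tstats.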