All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Good evening. The alert "Splunk DoS via Malformed S2S Request" has been triggering constantly on one specific system, but the universal forwarder on that machine is version 8.2.3.0 and our Splunk ES is version 8.2.5. According to Splunk, this vulnerability only affects versions 7.3.8 and earlier, 8.0.0 - 8.0.8, and 8.1.0 - 8.1.2. Could there be another reason why this alert triggers on one specific machine? Could certain processes cause this alert to trigger?
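One hedged way to cross-check what is actually connecting from that host: the indexing tier's connection metrics usually record the version each forwarder reports, so an older intermediate forwarder or a non-Splunk process hitting the receiving port would show up there. A sketch (the hostname filter is a placeholder, and the field names may vary by Splunk version):

index=_internal source=*metrics.log* group=tcpin_connections hostname=<that_host>
| stats latest(version) AS reported_version latest(fwdType) AS fwd_type BY hostname, sourceIp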
How do I configure a deployment server setup where a main (master) server pushes apps to its clients and also to a secondary server behind a firewall, and that secondary server in turn pushes apps to its own set of clients?
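One common pattern for this kind of tiering (a sketch; the URI is a placeholder): make the secondary server a deployment client of the main one, and point its repositoryLocation at its own deployment-apps directory, so apps it receives from the main server are immediately available for it to re-serve to its downstream clients via its own serverclass.conf.

# deploymentclient.conf on the secondary server
[deployment-client]
# install received apps where this server serves them from, not etc/apps
repositoryLocation = $SPLUNK_HOME/etc/deployment-apps

[target-broker:deploymentServer]
targetUri = main-ds.example.com:8089

Only outbound 8089 from the secondary to the main server then needs to cross the firewall.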
I have a query that returns a table of extracted IDs:

index=my_index | rex field=_raw "ID=\[(?<id>.*)\]\[.*\]" | table id

I simply need to search the results of the above query under a different index, then return a stats count by a field from that index. I've tried using subsearch and join but must not be using them correctly, as no results are returned. What would be the correct way to do this?
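A minimal sketch of the subsearch form, assuming the target index carries the same value in a field also named id (if not, rename inside the subsearch):

index=other_index
    [ search index=my_index
      | rex field=_raw "ID=\[(?<id>[^\]]+)\]"
      | dedup id
      | fields id ]
| stats count BY some_field

The subsearch expands to (id="value1" OR id="value2" ...); if the field has a different name in the target index, end the subsearch with | rename id AS <target_field>. Subsearches are capped (10,000 results by default), so for very large ID sets, writing the IDs to a lookup scales better.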
I'm new to regex and having trouble extracting some text. My raw data is in the following format:

ID=[12839829389-8b7e89opf][2839128391DJ33838PR]

I need to extract the text between the first two brackets, 12839829389-8b7e89opf, into a new field. So far what I have does not work:

| rex field=_raw "ID=[(?<id>.*)]"

If anyone could help, it would be greatly appreciated.
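Square brackets are regex metacharacters, so they have to be escaped, and a negated character class stops the capture at the first closing bracket. A sketch that should work on the sample above:

| rex field=_raw "ID=\[(?<id>[^\]]+)\]"

Unescaped, the [(?<id>.*)] in the original is read as a character class rather than a literal bracket, which is likely why it returns nothing useful.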
I have a SED command in props.conf as below:

SEDCMD-replace-name = s/ethan/thomas/g

This replaces every "ethan" with "thomas", and it works. But if I want to keep the first occurrence unreplaced and replace everything from the second occurrence onward, what should the command be? I tried the below:

1) SEDCMD-replace-name = s/ethan/thomas/2g
Result: no replacement happens at all.

2) SEDCMD-replace-name = s/ethan/thomas/2
Result: only the second "ethan" is replaced with "thomas".

Is there a way I can specify a range of occurrences here, such as 2 to 10, or 2 to the end? Please help.
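Splunk's SEDCMD does not appear to support GNU sed's occurrence ranges, but a two-step workaround can produce the same effect, assuming "thomas" never occurs in the raw event before masking (otherwise step b would wrongly turn a pre-existing "thomas" back). The class names are chosen so step a sorts before step b, on the assumption that classes apply in lexicographic order:

SEDCMD-a_replace_all = s/ethan/thomas/g
SEDCMD-b_restore_first = s/thomas/ethan/

Step a replaces every occurrence; step b, having no /g flag, rewrites only the first match, restoring the original leading "ethan".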
Sometimes our application dumps core (duh!), and we'd like the output of gdb -ex "bt full" -ex quit corefile to be forwarded to the Splunk server when this happens. Can the forwarder do this -- instead of trying to parse a file, invoke a command and forward its output -- or must we write our own forwarder?
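The forwarder can do this via a scripted input: it periodically runs a command and indexes whatever the command prints to stdout. A sketch (app name, interval, index, and the wrapper script are all placeholders):

# inputs.conf on the forwarder
[script://$SPLUNK_HOME/etc/apps/myapp/bin/core_backtrace.sh]
interval = 300
sourcetype = gdb_backtrace
index = main
disabled = false

The wrapper script would look for core files it has not processed yet and run something like gdb /path/to/app -ex "bt full" -ex quit <corefile> on each, letting the backtrace flow to stdout.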
I've been trying to find an _internal or _audit log event showing when a Splunk diag was created on a given server, but I've been unable to find anything in those indexes, nor any documentation around it. When troubleshooting support cases with Splunk, it is often important to know when a diag was created on a server in the context of the issue's timeline. Our goal is simply to timechart critical events, including diag generation, by server/host so we can visualize what happened in what order. Does anyone have any experience with this?
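If nothing is logged natively, one hedged workaround is to index evidence of the diag archives themselves, e.g. a small scripted input that lists the tarballs with their modification times (this assumes the default diag naming and output location under $SPLUNK_HOME):

#!/bin/sh
# emit one line per diag archive, with an ISO-style timestamp
ls -l --time-style=+%Y-%m-%dT%H:%M:%S "$SPLUNK_HOME"/diag-*.tar.gz 2>/dev/null

Timecharting the resulting events by host then gives "a diag was generated here" markers alongside the other critical events.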
Hey there Splunk community. I'm new here and would appreciate some help if possible. I wrote a Python script that generates an XML file when you run it. However, when I run it through Splunk, I don't get the generated XML files in the same folder as the script, which is what happens when I run it from the console. Where do those XML files go? I can't find them. Thanks!
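When Splunk launches a script, its working directory is typically not the script's folder (it is usually somewhere under $SPLUNK_HOME), so relative output paths land there instead. A sketch of anchoring the output next to the script, with the filename as a placeholder:

import os

# resolve the directory this script lives in, so the output lands there
# regardless of the working directory Splunk launches the script from
script_dir = os.path.dirname(os.path.abspath(__file__))
out_path = os.path.join(script_dir, "output.xml")

with open(out_path, "w") as f:
    f.write(xml_string)  # xml_string: whatever XML the script generates

Searching under $SPLUNK_HOME (and $SPLUNK_HOME/bin) may also turn up the files already generated.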
In short, I have a router with an IP address on a virtual machine, and I need that when a log arrives saying one of its interfaces has gone down, an alert triggers and my script runs.

test1.py

from netmiko import ConnectHandler

R1 = {
    "device_type": "cisco_ios",
    "host": "R1",
    "ip": "192.168.12.130",
    "username": "admin",
    "password": "admin1"
}

def main():
    commands = ['int fa3/0', 'no sh']
    connect = ConnectHandler(**R1)
    connect.enable()
    output = connect.send_config_set(commands)
    print(f"\n\n-------------- Device {R1['ip']} --------------")
    print(output)
    print("-------------------- End -------------------")

if __name__ == '__main__':
    main()

When the alert fires in Splunk, the "Add to Triggered Alerts" action triggers, but the .py file itself does not run. I checked via ".../splunk.exe cmd python .../test1.py" -- it starts and works.

alert_actions.conf

[test1]
is_custom = 1
label = Change_interface_state
description = Change_interface_state
icon_path = test1.png
alert.execute.cmd = test1.py

app.conf

[install]
is_configured = 1
state = enabled

[ui]
is_visible = 1
label = test

[launcher]
author = QAZxsw
description = This is custom
version = 1.0.0

test1.html

<form class="form-horizontal form-complex">
    <p>Change state of interface</p>
</form>

Help (._.)
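Two hedged things to check: alert.execute.cmd resolves the script from the app's bin directory, so test1.py generally needs to live at $SPLUNK_HOME/etc/apps/<your_app>/bin/test1.py; and splunkd logs custom alert-action execution, so any launch error usually shows up with a search like:

index=_internal sourcetype=splunkd sendmodalert test1

If nothing appears there at all, the action is probably never being invoked; if it appears with an error, the message usually says why the launch failed.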
Does anyone have a sample of how the REST API is configured to connect to OpenDNS Umbrella?
Hi, we have a scenario where three different events should be combined based on an event ID.

Example:
Event 1 fields: Hostname, Unique_ID, Has_Vulnerabilities
Event 2 fields: Scan_Date, Hostname_Unique_ID, Vulnerability_Id
Event 3 fields: Vulnerability_id, Description, Start_Date, ...

What we are trying to do: when I click on Event 1's Unique_ID, get all vulnerabilities for the selected host from Event 2, enriched with selected data from Event 3. All three events are in the same index but different sourcetypes. What is the best approach here? A subsearch seems slow if I go to Event 2 first and then filter by Event 1; I want it the other way around.
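A sketch of a stats-based combine, which usually beats join/subsearch at scale; the sourcetype names and the drilldown token are assumptions:

index=my_index (sourcetype=scan_results OR sourcetype=vuln_details)
| eval vuln_key=coalesce(Vulnerability_Id, Vulnerability_id)
| stats values(Scan_Date) AS Scan_Date values(Hostname_Unique_ID) AS Hostname_Unique_ID
        values(Description) AS Description values(Start_Date) AS Start_Date BY vuln_key
| search Hostname_Unique_ID="$clicked_unique_id$"

The idea is that Event 1 never has to be joined at search time: the clicked Unique_ID arrives as a dashboard token, and Events 2 and 3 are merged by normalizing their two spellings of the vulnerability ID into one key.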
I have a sourcetype whose logs I have been trying to break apart, but I keep getting "Failed to parse timestamp". Here is the props.conf stanza:

[ logs ]
CHARSET=UTF-8
EVENT_BREAKER_ENABLE=true
LINE_BREAKER=([\r\n]+)\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\,\d{3}
MAX_EVENTS=135000
MAX_TIMESTAMP_LOOKAHEAD=23
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=true
TIME_FORMAT=%Y-%m-%d %H:%M:%S,%3N
TRUNCATE=50000
TZ=America/New_York
disabled=false
pulldown_type=true

The events look like they break correctly, but I still keep getting the error about the timestamp. Here is an example of the logs:

2022-04-25 11:28:17,743 ERROR [148] Method:C1222.MessageProcessor.ProcessResponseMessage -- String[] {Unexpected Exception: Internal Error - Unable to find Endpoint by ApTitle. - ApTitle: 2.16.124.113620.1.22.0.1.1.64.5541482OldDeviceAddress: x.xx.xxx.xxxxxx.x.xx.x.x.x.xx.xxxxxxx, Internal Error - Unable to find Endpoint by ApTitle.} Itron.Ami.Common.Logging.AmiException: Internal Error - Unable to find Endpoint by ApTitle.
2022-04-25 11:28:17,759 ERROR [148] Method:C1222.MessageProcessor.ProcessResponseMessage -- Unexpected System Exception: AmiException - Internal Error - Unable to find Endpoint by ApTitle. received - contact Application manager.
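A hedged revision of the stanza, assuming the sourcetype should literally be named "logs": the spaces inside [ logs ] make the stanza name not match the sourcetype, so none of these settings (including TIME_FORMAT) would be applied, which alone could explain the timestamp error. It is also usual to set SHOULD_LINEMERGE=false once LINE_BREAKER defines the event boundary, and TIME_PREFIX=^ pins the timestamp to the start of each event:

[logs]
CHARSET = UTF-8
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 23
MAX_EVENTS = 135000
NO_BINARY_CHECK = true
TRUNCATE = 50000
TZ = America/New_York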
Hi all,

The log acquisition interval is 60 seconds. If request_timeout is set longer than the default 60 seconds, is there a possibility that log acquisition operations overlap, as described below?

Example: retrieval interval of 60 seconds, timeout of 90 seconds:
06:00:00 acquisition Sample-A starts
06:01:00 acquisition Sample-B starts
06:01:20 acquisition Sample-A ends
I am running the following query, where at the end I want to fetch the value of the "Client" key from the JSON and count all such clients:

QUERY | rex ".*\"Client\":\"(?<Client>.*)\"," | stats count by Client

The events always contain the JSON as one of the values, but the order of the keys may change. The extraction of Client is not working and Client comes back null. What is the problem here? My events look as follows:

Event type 1:

request-id : ABC Executing following method: Class.RestClass ::: with values:
{
  "d1": "EU",
  "sn": "sn",
  "entityType": "USER",
  "email": "test@gmail.com",
  "id": ["123"],
  "Client": "TEST",
  "time": "2020-01-01T01:01:01Z",
  "List": [
    { "Type": "Items1", "value": "-1", "match": "NO" }
  ]
}

Event type 2:

request-id : 234 Execute something ::: with param-values:
{
  "d1": "JP",
  "sn": "sn",
  "type": "USER",
  "user": "test1@gmail.com",
  "id": ["123"],
  "source": "S1",
  "Client": "test_client",
  "initiate": "init_Name",
  "mode": "Test",
  "t1": "",
  "t2": "",
  "auto": true,
  "list": [
    { "type": "type_count", "value": "-1", "creteria": "skip" }
  ]
}

How can I correct my query to get the correct results?
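Two likely causes: the events have a space after the colon ("Client": "TEST"), which the pattern \"Client\":\" does not allow, so it never matches; and even when it does match, the greedy (?<Client>.*) runs to the last \", on the line rather than the value's own closing quote. A sketch of a more robust extraction:

QUERY
| rex "\"Client\"\s*:\s*\"(?<Client>[^\"]+)\""
| stats count by Client

Because the negated class [^\"]+ stops at the closing quote of the value, the key order in the JSON no longer matters. Alternatively, extract the whole {...} blob into a field and use spath input=<field> path=Client for a proper JSON parse.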
Hello everyone, we are deploying EDR agents to all servers in our environment, but I wonder whether an EDR agent can cause any issues on Splunk components like indexers, search heads, or heavy forwarders. Has anyone installed an EDR agent on Splunk components running CentOS 7, and did you run into any problems? Kind regards
We built a custom app and deployed it in Splunk. It writes logs to splunk/var/log/appname/appname.log. I would like to find a way to handle the logs natively within Splunk, possibly using log.cfg to roll the log at a certain size and to manage retention. I added a stanza to log.cfg in an attempt to manage this log, but Splunk doesn't appear to honor the added configs. Has anyone used this file in such a way? Is it even possible? Below is a snippet of the config I added to log.cfg for the app.

appender.appname=RollingFileAppender
appender.appname.fileName=${SPLUNK_HOME}/var/log/appname/appname.log
appender.appname.maxFileSize=25000000 # default: 25MB (specified in bytes).
appender.appname.maxBackupIndex=5
appender.appname.layout=PatternLayout
appender.appname.layout.ConversionPattern=%d{%m-%d-%Y %H:%M:%S.%l %z} %-5p %c - %m%n
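A hedged alternative, on the assumption that log.cfg only governs splunkd's own logging channels (which would explain why an appender for a separately written app log is ignored): if the app is Python-based, it can rotate its own log with the standard library, mirroring the intended 25 MB / 5-backup policy. Path and logger name are placeholders:

import logging
from logging.handlers import RotatingFileHandler

# roll at ~25 MB and keep 5 backups, matching the attempted log.cfg values
handler = RotatingFileHandler(
    "/opt/splunk/var/log/appname/appname.log",
    maxBytes=25_000_000,
    backupCount=5,
)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s - %(message)s"))

logger = logging.getLogger("appname")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

If the app is not Python, an OS-level logrotate rule on that path achieves the same size/retention behavior.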
Hello, we recently migrated our CM to a new, clean host. After the migration almost everything is good, but I have a few errors relating to mask changes, and they seem to involve only a single host:

04-26-2022 08:46:36.578 -0400 INFO CMRepJob - running job=CMChangeMasksJob guid=4769183B-D1C2-4906-AFE5-7A799E2A3B5D number-of-changes=30 genid=91958
04-26-2022 08:46:36.726 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=server_oracle~220~4C96BDC6-0710-452F-9514-50C631A94286 transitioning from=SearchablePendingMask to=Searchable oldmask=0xffffffffffffffff newmask=0x0 reason="mask change failed, reverting back"
04-26-2022 08:46:36.726 -0400 WARN CMMaster - mask change failed, reverting back bid=server_oracle~239~FBCEF346-BB10-406D-BD87-087A1DC6F5BF mask=0 searchState=Searchable status=Complete
04-26-2022 08:46:36.726 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=server_oracle~239~FBCEF346-BB10-406D-BD87-087A1DC6F5BF transitioning from=SearchablePendingMask to=Searchable oldmask=0xffffffffffffffff newmask=0x0 reason="mask change failed, reverting back"
04-26-2022 08:46:36.726 -0400 WARN CMMaster - mask change failed, reverting back bid=windows~29197~FBCEF346-BB10-406D-BD87-087A1DC6F5BF mask=0 searchState=Searchable status=Complete
04-26-2022 08:46:36.726 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=windows~29197~FBCEF346-BB10-406D-BD87-087A1DC6F5BF transitioning from=SearchablePendingMask to=Searchable oldmask=0xffffffffffffffff newmask=0x0 reason="mask change failed, reverting back"
04-26-2022 08:46:36.726 -0400 WARN CMMaster - mask change failed, reverting back bid=windows_events~3932~7AD6523B-A4F1-4B0E-8C77-511FD5FA2286 mask=0 searchState=Searchable status=Complete
04-26-2022 08:46:36.726 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=windows_events~3932~7AD6523B-A4F1-4B0E-8C77-511FD5FA2286 transitioning from=SearchablePendingMask to=Searchable oldmask=0xffffffffffffffff newmask=0x0 reason="mask change failed, reverting back"
04-26-2022 08:46:36.726 -0400 INFO CMMaster - event=commitGenerationFailure pendingGen=91958 requesterReason=changeBucketMasks failureReason='event=checkDirtyBuckets first unmet bid=cim_modactions~196~5678E53C-8999-4C69-8032-C55BDB86745E'
04-26-2022 08:46:37.724 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=windows_events~3932~7AD6523B-A4F1-4B0E-8C77-511FD5FA2286 transitioning from=Searchable to=SearchablePendingMask oldmask=0x0 newmask=0xffffffffffffffff reason="fixup searchable mask"
04-26-2022 08:46:37.724 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=windows~29197~FBCEF346-BB10-406D-BD87-087A1DC6F5BF transitioning from=Searchable to=SearchablePendingMask oldmask=0x0 newmask=0xffffffffffffffff reason="fixup searchable mask"
04-26-2022 08:46:37.724 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=server_oracle~239~FBCEF346-BB10-406D-BD87-087A1DC6F5BF transitioning from=Searchable to=SearchablePendingMask oldmask=0x0 newmask=0xffffffffffffffff reason="fixup searchable mask"
04-26-2022 08:46:37.724 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=server_oracle~220~4C96BDC6-0710-452F-9514-50C631A94286 transitioning from=Searchable to=SearchablePendingMask oldmask=0x0 newmask=0xffffffffffffffff reason="fixup searchable mask"
04-26-2022 08:46:37.724 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=server_oracle~215~F448A588-91B9-47D9-99DB-B1CE27CA51AA transitioning from=Searchable to=SearchablePendingMask oldmask=0x0 newmask=0xffffffffffffffff reason="fixup searchable mask"
04-26-2022 08:46:37.724 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=server_oracle~203~7423238A-3907-4BA1-A8A6-8A9A126A6B21 transitioning from=Searchable to=SearchablePendingMask oldmask=0x0 newmask=0xffffffffffffffff reason="fixup searchable mask"
04-26-2022 08:46:37.724 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=server_oracle~49~BB39BC9E-D7DA-4934-8D4A-FC7DD9C982B4 transitioning from=Searchable to=SearchablePendingMask oldmask=0x0 newmask=0xffffffffffffffff reason="fixup searchable mask"
04-26-2022 08:46:37.724 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=server_oracle~2~A98C1984-B48A-4B58-8D83-B6D1FAA01F08 transitioning from=Searchable to=SearchablePendingMask oldmask=0x0 newmask=0xffffffffffffffff reason="fixup searchable mask"
04-26-2022 08:46:37.724 -0400 INFO CMPeer - peer=4769183B-D1C2-4906-AFE5-7A799E2A3B5D peer_name=host01 bid=server_ad~20~AC4D9A8B-995F-4043-BBE1-1FD61BFA3BEB transitioning from=Searchable to=SearchablePendingMask oldmask=0x0 newmask=0xffffffffffffffff reason="fixup searchable mask"
04-26-2022 08:46:45.718 -0400 INFO CMMaster - event=commitGenerationFailure pendingGen=91958 requesterReason=changeBucketMasks failureReason='event=checkDirtyBuckets first unmet bid=cim_modactions~196~5678E53C-8999-4C69-8032-C55BDB86745E'

These issues are the only thing keeping our cluster from being completely migrated/fixed. Thanks for the help! Todd Waller
Hi all, in my dashboard I have an "edit data" option. For a few multiselect inputs the previous value is null; on edit, when I select any new value, I want that null value removed from the multiselect. I am using JavaScript to add/edit records from the UI. Could you please help with handling the null values, i.e. removing them when new data is selected? Thanks!
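A sketch in SplunkJS, assuming the input's id is my_multiselect and the stale entry is the literal string "null" (adjust the test to whatever your edit flow actually stores):

require(["splunkjs/mvc", "splunkjs/mvc/simplexml/ready!"], function (mvc) {
    var multi = mvc.Components.get("my_multiselect");
    multi.on("change", function () {
        var vals = multi.val() || [];
        // once a real value is chosen, drop the null placeholder
        var cleaned = vals.filter(function (v) {
            return v !== null && v !== "null" && v !== "";
        });
        if (cleaned.length > 0 && cleaned.length !== vals.length) {
            multi.val(cleaned);
        }
    });
});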
Hi: I have logs delimited by ||. I would like to extract the nth value from each log and group by that value with a count. I am fairly new to Splunk. This is how far I have gotten:

index=<index> INSERT OR UPDATE | eval fields=split(_raw,"||") | <WHAT DO I NEED HERE> | stats count by <field_value> | sort -count

My data:

INSERT||"test Identifier"||"hostname"||"192.168.2.1"||"This is a test log"||....
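split() returns a multivalue field, and mvindex() picks the nth entry, counting from 0. A sketch that groups by the 4th delimited value (the IP address in the sample); change the index argument for a different position:

index=<index> INSERT OR UPDATE
| eval fields=split(_raw,"||")
| eval field_value=mvindex(fields, 3)
| stats count by field_value
| sort -count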
Hi all, is there a way to determine how much data the agents send to AppDynamics? Regards, Charan

^ Post edited by @Ryan.Paredez for clarity and improved title for searchability