All Topics


Hey everyone. Need some help breaking a JSON event that is ingested in the following nested JSON format:

[
  {
    "title": "Bad Stuff",
    "count": 2,
    "matches": [
      { "EventID": 13, "EventRecordID": 19700, "User": "NT AUTHORITY\\SYSTEM" },
      { "EventID": 16, "EventRecordID": 21700, "User": "NT AUTHORITY\\ADMIN" }
    ]
  },
  {
    "title": "Next Bad Stuff",
    "count": 2,
    "matches": [
      { "EventID": 14, "EventRecordID": 19700, "User": "NT AUTHORITY\\SYSTEM" },
      { "EventID": 17, "EventRecordID": 21700, "User": "NT AUTHORITY\\ADMIN" }
    ]
  }
]

I would like to break it into separate events like this:

{ "title": "Bad Stuff", "count": 2, "EventID": 13, "EventRecordID": 19700, "User": "NT AUTHORITY\\SYSTEM" }
{ "title": "Bad Stuff", "count": 2, "EventID": 16, "EventRecordID": 21700, "User": "NT AUTHORITY\\ADMIN" }
{ "title": "Next Bad Stuff", "count": 2, "EventID": 14, "EventRecordID": 19700, "User": "NT AUTHORITY\\SYSTEM" }
{ "title": "Next Bad Stuff", "count": 2, "EventID": 17, "EventRecordID": 21700, "User": "NT AUTHORITY\\ADMIN" }

What would I need in my props.conf and transforms.conf to achieve this?

Thanks in advance, Splunk community! Cheers.
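Splitting a nested array like this purely with props.conf/transforms.conf is awkward, because breaking on the inner matches objects at index time would lose the enclosing title and count. A search-time alternative, sketched with spath and mvexpand (field names come from the sample event; alert and match are scratch names introduced here, not part of the data):

```
... | spath path={} output=alert
    | mvexpand alert
    | spath input=alert path=title
    | spath input=alert path=count
    | spath input=alert path=matches{} output=match
    | mvexpand match
    | spath input=match
    | fields title count EventID EventRecordID User
```

Each row then carries one match plus its parent's title and count, matching the desired output shape.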
Hi to all, my Splunk architecture consists of: 4 search heads, 2 indexers, and 1 deployment server (which also hosts the cluster master and the deployer).

I need to install a heavy forwarder, but I don't have any machines available; where is the best place to install a second Splunk Enterprise instance (the heavy forwarder)?

Thanks to all.
Hi, I want to know the list of event types and attributes used for ADQL queries. Thank you, Hemanth Kumar.
Hi, I am creating a dashboard panel via the classic (Simple XML) method. My query is quite straightforward, as shown below. The issue is that the panel displays all the results despite my XML having count set to 5. Any idea why it does so, and how to make Splunk limit results as per the count?

<title>Top 5 Countries Last 24 hours</title>
<table>
  <search>
    <query>index=aws sourcetype="aws:waf" "httpRequest.country"!="-"
| stats count by httpRequest.country</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <option name="count">5</option>
  <option name="drilldown">none</option>
  <option name="refresh.display">progressbar</option>
</table>
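A likely explanation: in Simple XML, a table's count option sets the number of rows shown per page (pagination), not the number of results returned, so the panel still holds every row. If the intent is a true top 5, trimming in the search itself is the usual approach; a sketch:

```
index=aws sourcetype="aws:waf" "httpRequest.country"!="-"
| stats count by httpRequest.country
| sort - count
| head 5
```

The sort/head pair guarantees the five highest counts rather than the first five rows in arbitrary order.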
Hello, I am receiving the following warning in my app:

[warning screenshot omitted]

I have updated my app following the last option in this article: https://www.splunk.com/en_us/blog/tips-and-tricks/html-dashboards-deprecation.html

"The final option is to move your HTML files from /data/ui/html to /appserver/static/template and add a Single Page Application (SPA) view that specifies <view template="app-name:/static/template/path" type="html">."

But I still get the warning. There may be something wrong with my code, but I can't find it. Please, someone help!
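For comparison, following the article's wording literally, the replacement view file (placed under the app's default/data/ui/views, with my_app and index.html as placeholder names) might look like:

```
<view template="my_app:/static/template/index.html" type="html" />
```

A common pitfall is the template path not matching the actual location under appserver/static/template exactly, including case, so that is worth double-checking against the moved file.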
Hi, I have for each event the open_time and update_time, and I want to calculate the age of the event, like:

open_time     update_time    age
2022-03-26    2022-04-26     1m
2022-04-22    2022-04-26     4d

Any idea? Thanks.
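A sketch of one way to do this, assuming the two fields are strings in %Y-%m-%d format (the 30-day cutoff for switching from days to months is an arbitrary choice here):

```
| eval open_epoch=strptime(open_time, "%Y-%m-%d")
| eval update_epoch=strptime(update_time, "%Y-%m-%d")
| eval age_days=floor((update_epoch - open_epoch) / 86400)
| eval age=if(age_days >= 30, floor(age_days / 30) . "m", age_days . "d")
```

For the sample rows, 2022-03-26 to 2022-04-26 is 31 days and renders as "1m", while 2022-04-22 to 2022-04-26 renders as "4d".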
I have to prepare reporting dashboards in Splunk, for which I have used this query until now:

field1=GTIN_RECEIVED field2=NREC field3=*1234* field4=SNS NOT [search field1=MESSAGE_INVALID OR field1=GTIN_INVALID field2=NREC OR field2=PRODUCER field3=*1234* field4=SNS | dedup field5 | fields field5 ]
| dedup field5
| table field5
| rename field5 as gtin

The data size is huge now and the query takes too long to run, which makes it very difficult for me to generate the dashboard.

Can someone please help simplify this query so that it takes minimal time?
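Subsearches are executed separately and are capped in both results and runtime, so folding both passes into a single search with stats is a common speed-up. A sketch, assuming the two searches' conditions can be combined as below (the boolean grouping of the original is ambiguous and worth re-checking):

```
field3=*1234* field4=SNS ((field1=GTIN_RECEIVED field2=NREC) OR ((field1=MESSAGE_INVALID OR field1=GTIN_INVALID) (field2=NREC OR field2=PRODUCER)))
| eval invalid=if(field1="MESSAGE_INVALID" OR field1="GTIN_INVALID", 1, 0)
| stats max(invalid) as invalid by field5
| where invalid=0
| table field5
| rename field5 as gtin
```

This makes one pass over the data, flags each field5 that ever saw an invalid message, and keeps the rest, replacing both dedups and the NOT-subsearch.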
Hello Splunk Community, I am facing this issue and was hoping someone could help me: in the Splunk data model, for the auto-extracted fields, there are some events whose fields are not being extracted. The majority of the events have their fields extracted, but there are some 10-15 events whose fields are not being extracted properly. Any suggestions/ideas as to what is causing this discrepancy? Thanks!
I have a Threat Intelligence search that I would like to filter based on results. The scenario: if the threat activity is matched in the Network_Traffic data model, then, based on action (allowed, dropped, or blocked), the search should only send me the allowed traffic and filter out dropped or blocked traffic.

| from datamodel:"Threat_Intelligence"."Threat_Activity"
| search NOT [| inputlookup local_intel_whitelist.csv | fields threat_collection_key, dest | table threat_collection_key, dest | format "(" "(" "OR" ")" "OR" ")" ]
| append [| map search="search index=netfilter $threat_match_value$" | eval threat_action_value="found" | eval action="*" ]    <-- this is the line I added
| dedup threat_match_field,threat_match_value
| `get_event_id`
| table _raw,event_id,source,src,dest,threat*,weight, orig_sourcetype, action
| rename weight as record_weight
| `per_panel_filter("ppf_threat_activity","threat_match_field,threat_match_value")`
| `get_threat_attribution(threat_key)`
| rename source_* as threat_source_*,description as threat_description
| eval risk_score=case(isnum(record_weight), record_weight, isnum(weight), weight, 1=1, null())
| fields - *time
| eval risk_object_type=case(threat_match_field="query" OR threat_match_field=="src" OR threat_match_field=="dest","system",threat_match_field=="src_user" OR threat_match_field=="user","user",1=1,"other")
| eval risk_object=threat_match_value
| dedup dest
| eval urgency=if(threat_category=="_MISP", "medium" , "high")
I ran this search on Splunk Cloud web and I got the results below. Can anyone help on how to resolve this?

index=_internal source=*/splunkforwarder/var/log/splunk/splunkd.log OR source=*SplunkUniversalForwarder\\var\\log\\splunk\\splunkd.log log_level=ERROR | transaction host component

1)
04-26-2022 13:27:26.944 -0700 ERROR ExecProcessor [4000 ExecProcessor] - message from ""c:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" WinEventCommonChannel - EvtDC::connectToDC: DsBind failed: (1722)
04-26-2022 13:27:26.944 -0700 ERROR ExecProcessor [4000 ExecProcessor] - message from ""c:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" splunk-winevtlog - WinEventLogChannel::init: Failed to bind to DC, dc_bind_time=1031 msec
04-26-2022 13:27:27.959 -0700 ERROR ExecProcessor [4000 ExecProcessor] - message from ""c:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" WinEventCommonChannel - EvtDC::connectToDC: DsBind failed: (1722)
04-26-2022 13:27:29.090 -0700 ERROR ExecProcessor [4000 ExecProcessor] - message from ""c:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" WinEventCommonChannel - EvtDC::connectToDC: DsBind failed: (1722)
04-26-2022 13:27:29.715 -0700 ERROR ExecProcessor [4000 ExecProcessor] - message from ""c:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" WinEventCommonChannel - EvtDC::connectToDC: DsBind failed: (1722)

2)
04-26-2022 09:38:13.402 -0700 ERROR TcpOutputFd [5228 TcpOutEloop] - Connection to host=1*******0.146:9997 failed
04-26-2022 09:38:43.312 -0700 ERROR TcpOutputFd [5228 TcpOutEloop] - Connection to host=1*******0.146:9997 failed
04-26-2022 09:39:13.173 -0700 ERROR TcpOutputFd [5228 TcpOutEloop] - Connection to host=1*******0.146:9997 failed
04-26-2022 09:39:43.118 -0700 ERROR TcpOutputFd [5228 TcpOutEloop] - Connection to host=1*******0.146:9997 failed
04-26-2022 09:40:12.952 -0700 ERROR TcpOutputFd [5228 TcpOutEloop] - Connection to host=1*******0.146:9997 failed

3)
04-26-2022 08:27:54.691 -0700 ERROR PipelineComponent [6004 CallbackRunnerThread] - Monotonic time source didn't increase; is it stuck?
Good evening, the alert "Splunk DoS via Malformed S2S Request" has been constantly triggering on one specific system, but the universal forwarder on that machine is version 8.2.3.0 and our Splunk ES is version 8.2.5. According to Splunk, this alert only affects versions 7.3.8 and earlier, 8.0.0 - 8.0.8, and 8.1.0 - 8.1.2. Would there be another reason why this alert would trigger on one specific machine? Could certain processes cause this alert to trigger?
How do I configure a deployment server topology where a main (master) deployment server pushes apps to its own clients and also pushes apps to a secondary deployment server behind a firewall, and that secondary server in turn pushes apps to its own set of clients?
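One way this tiering is commonly wired up (hostnames and class/app names below are hypothetical): the secondary server is itself a deployment client of the main server, and separately acts as a deployment server for the hosts behind the firewall. The main server delivers apps into the secondary's deployment-apps directory so they can be redistributed:

```
# deploymentclient.conf on the secondary server:
# pull apps from the main deployment server
[target-broker:deploymentServer]
targetUri = main-ds.example.com:8089

# serverclass.conf on the main server:
# install apps into the secondary's deployment-apps, not etc/apps,
# so its own deployment server can push them onward
[serverClass:tier2_ds]
whitelist.0 = secondary-ds.example.com
targetRepositoryLocation = $SPLUNK_HOME/etc/deployment-apps
```

The secondary server then carries its own serverclass.conf for the firewalled clients; the targetRepositoryLocation behavior is worth verifying against the serverclass.conf spec for your version.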
I have a query that returns a table of extracted IDs: index=my_index | rex field=_raw "ID=\[(?<id>.*\]\[.*\]" | table id I simply need to search the results of the above query under a different index, then return a stats count by a field from that index. I've tried using subsearch and join but must not be using them correctly as no results are returned. What would be the correct way to do this?
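A sketch of the subsearch form, assuming the second index also has a field named id so the subsearch's output matches directly (otherwise a rename inside the subsearch is needed); other_index and status are placeholder names, and note that subsearches are capped at around 10,000 results by default:

```
index=other_index
    [ search index=my_index
      | rex field=_raw "ID=\[(?<id>[^\]]+)\]"
      | dedup id
      | fields id ]
| stats count by status
```

The subsearch expands into an OR of id=... terms against the outer index; if the outer events hold the ID only in _raw, the same rex would need to run on the outer search before the filter can match.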
I'm new to regex and having trouble extracting some text. My raw data is in the following format:

ID=[12839829389-8b7e89opf][2839128391DJ33838PR]

I need to extract the text between the first two brackets, 12839829389-8b7e89opf, into a new field.

So far what I have does not work:

| rex field=_raw "ID=[(?<id>.*)]"

If anyone could help it would be greatly appreciated.
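The likely issue is that square brackets are regex metacharacters (they define a character class), so they must be escaped to match literally; a sketch of the corrected extraction:

```
| rex field=_raw "ID=\[(?<id>[^\]]+)\]"
```

Using [^\]]+ instead of .* also keeps the match from running greedily past the first closing bracket into the second pair, so id captures only 12839829389-8b7e89opf for the sample above.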
I have a SED command in props.conf as below:

SEDCMD-replace-name = s/ethan/thomas/g

This replaces every "ethan" with "thomas", and it works. But what if I want to keep the first occurrence unreplaced and replace everything from the second occurrence to the end; what should the command be?

I tried the following:

1) SEDCMD-replace-name = s/ethan/thomas/2g
Result: no replacement happens at all.

2) SEDCMD-replace-name = s/ethan/thomas/2
Result: only the 2nd "ethan" is replaced with "thomas".

Is there a way I can specify a range of occurrences here, such as 2 to 10, or 2 to end? Please help.
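The Ng form (replace from the Nth match onward) is a GNU sed extension, which may be why the 2g flag silently does nothing in Splunk's sed implementation. One workaround that uses only plain numbered and global flags is to shield the first occurrence, replace the rest, then restore it. A sketch, assuming SEDCMD classes are applied in the lexicographic order of their class names (worth verifying on a test sourcetype), with ZZPLACEHOLDERZZ as an arbitrary token that never appears in real events:

```
SEDCMD-a_shield  = s/ethan/ZZPLACEHOLDERZZ/1
SEDCMD-b_replace = s/ethan/thomas/g
SEDCMD-c_restore = s/ZZPLACEHOLDERZZ/ethan/1
```

This yields "keep the first, replace all the rest"; arbitrary ranges like "2 to 10" have no direct flag syntax in standard sed.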
Sometimes our application dumps core (duh!), and we'd like the output of gdb -ex "bt full" -ex quit corefile to be forwarded to the Splunk-server, when this happens. Can the Forwarder do this -- instead of trying to parse a file, invoke a command and forward its output -- or must we write our own forwarder?
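Scripted inputs do roughly this: the forwarder runs a command on an interval and indexes whatever it writes to stdout, so no custom forwarder is needed. A sketch (app name, paths, interval, and index are placeholders):

```
# $SPLUNK_HOME/etc/apps/coredump/local/inputs.conf
[script://./bin/core_bt.sh]
interval = 60
sourcetype = gdb:backtrace
index = main

# core_bt.sh (in the app's bin directory) would track which core files
# it has already seen and, for each new one, emit the backtrace, e.g.:
#   gdb -ex "bt full" -ex quit /path/to/your/app "$corefile"
```

The script has to do its own bookkeeping of already-processed core files, since the input simply re-runs it every interval and indexes the output.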
I've been trying to find an _internal or _audit trail log event showing when a Splunk Diag was created on a given server but have been unable to find anything in those indexes nor any documentation a... See more...
I've been trying to find an _internal or _audit trail log event showing when a Splunk Diag was created on a given server but have been unable to find anything in those indexes nor any documentation around it...  Often for troubleshooting case tickets with Splunk support it becomes important to know when a diag was created on a server in the context of the timeline of the issue. The goal we have is to simply timechart critical events, including diag generation, by server/host so we can visualize what happened in what order.  Anyone have any experience with this?
Hey there, Splunk community. I'm new here and I would appreciate some help if possible. I wrote a Python script that generates an XML file when you run it. However, when I run it through Splunk, I don't get the generated XML files in the same folder as the script, as I usually do when I run it from the console. Where do those XML files go? I can't find them. Thanks!
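When splunkd launches a script, the working directory is generally not the script's own folder (it is typically somewhere under the Splunk installation directory), so a relative output filename lands there instead; searching under $SPLUNK_HOME may turn up the missing files. A sketch of the usual fix is to anchor the output path to the script's location (report.xml is a hypothetical filename):

```python
import os

# Resolve the directory this script lives in, regardless of the
# working directory of the process that launched it.
script_dir = os.path.dirname(os.path.abspath(__file__))

# Write the XML next to the script instead of into the current
# working directory, which is what a bare relative path would use.
out_path = os.path.join(script_dir, "report.xml")
with open(out_path, "w") as f:
    f.write("<report/>")
```

With this change the file appears next to the script whether it is launched from a console or by Splunk.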
In short, I have a router with an IP address on a virtual machine, and I need that, when I receive a log saying one of its interfaces has gone down, a trigger fires and my script runs.

test1.py:

from netmiko import ConnectHandler

R1 = {
    "device_type": "cisco_ios",
    "host": "R1",
    "ip": "192.168.12.130",
    "username": "admin",
    "password": "admin1"
}

def main():
    commands = ['int fa3/0', 'no sh']
    connect = ConnectHandler(**R1)
    connect.enable()
    output = connect.send_config_set(commands)
    print(f"\n\n-------------- Device {R1['ip']} --------------")
    print(output)
    print("-------------------- End -------------------")

if __name__ == '__main__':
    main()

In Splunk, the Add to Triggered Alerts action fires, but the .py file itself does not run. When checked via ".../splunk.exe cmd python .../test1.py", it starts and works.

alert_actions.conf:

[test1]
is_custom = 1
label = Change_interface_state
description = Change_interface_state
icon_path = test1.png
alert.execute.cmd = test1.py

app.conf:

[install]
is_configured = 1
state = enabled

[ui]
is_visible = 1
label = test

[launcher]
author = QAZxsw
description = This is custom
version = 1.0.0

test1.html:

<from class="from-horizontal from-complex">
  <p>Change state of interface</p>
</from>

Help (._.)
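Two things are worth checking here, sketched below with hypothetical paths. First, alert.execute.cmd scripts are looked up in the app's bin directory, so test1.py must live there. Second, when Splunk runs a .py file it uses its own bundled Python, which does not have netmiko installed; a thin wrapper that calls a system Python (where netmiko is installed) is a common workaround on Windows:

```
# alert_actions.conf -- point the action at a wrapper instead of the .py
[test1]
is_custom = 1
label = Change_interface_state
alert.execute.cmd = test1.bat

# $SPLUNK_HOME\etc\apps\test\bin\test1.bat -- hypothetical paths:
#   @echo off
#   "C:\Python39\python.exe" "C:\Program Files\Splunk\etc\apps\test\bin\test1.py"
```

Running the script once with the bundled interpreter ($SPLUNK_HOME\bin\splunk.exe cmd python test1.py) should reveal an ImportError if the missing-netmiko explanation is the right one.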
Anyone have any sample of how the REST api is configured to connect to OpenDNS Umbrella?